autonomous transactions
I would like to propose the attached patch implementing autonomous
transactions for discussion and review.
This work was mostly inspired by the discussion about pg_background a
while back [0]. It seemed that most people liked the idea of having
something like that, but perhaps couldn't agree on the final interface.
Most if not all of the preliminary patches in that thread were
committed, but the user interface portions were then abandoned in favor
of other work. (I'm aware that rebased versions of pg_background
exist. I have one, too.)
The main use case, in a nutshell, is to be able to commit certain things
independently, without having them affected by what happens later to the
current transaction, for example for audit logging.
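To illustrate the intended semantics (this is not the patch, just a rough
stand-in): two separate SQLite connections play the roles of the main
backend and the autonomous session, and the table names are made up.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts the driver in autocommit mode, so we control
# transactions with explicit BEGIN/ROLLBACK.
main = sqlite3.connect(path, isolation_level=None)
main.execute("CREATE TABLE test1 (a INTEGER)")
main.execute("CREATE TABLE audit (msg TEXT)")

main.execute("BEGIN")  # the outer transaction (deferred, no lock yet)

# The "autonomous" part: a second, independent session commits on its own,
# regardless of what the outer transaction later does.
auto = sqlite3.connect(path, isolation_level=None)
auto.execute("INSERT INTO audit VALUES ('about to insert into test1')")
auto.close()

main.execute("INSERT INTO test1 VALUES (1)")
main.execute("ROLLBACK")  # outer work is undone ...

t1 = main.execute("SELECT count(*) FROM test1").fetchone()[0]  # rolled back
au = main.execute("SELECT count(*) FROM audit").fetchone()[0]  # ... but the audit row survives
```

The audit row survives the rollback of the outer transaction, which is
exactly the behavior the proposal wants for audit logging.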
My patch consists of three major pieces. (I didn't make them three
separate patches because it will be clear where the boundaries are.)
- An API interface to open a "connection" to a background worker, run
queries, get results: AutonomousSessionStart(), AutonomousSessionEnd(),
AutonomousSessionExecute(), etc. The communication happens using the
client/server protocol.
- Patches to PL/pgSQL to implement Oracle-style autonomous transaction
blocks:
AS $$
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 0..9 LOOP
    START TRANSACTION;
    INSERT INTO test1 VALUES (i);
    IF i % 2 = 0 THEN
      COMMIT;
    ELSE
      ROLLBACK;
    END IF;
  END LOOP;
  RETURN 42;
END;
$$;
This is very incomplete and has some open technical issues that I will
discuss below. But those are all issues of PL/pgSQL, not really issues
of how autonomous sessions work.
Basically, a block that is declared with that pragma uses the autonomous
C API instead of SPI to do its things.
- Patches to PL/Python to implement a context manager for autonomous
sessions (similar to how subtransactions work there):
with plpy.autonomous() as a:
    for i in range(0, 10):
        a.execute("BEGIN")
        a.execute("INSERT INTO test1 (a) VALUES (%d)" % i)
        if i % 2 == 0:
            a.execute("COMMIT")
        else:
            a.execute("ROLLBACK")
This works quite well, except perhaps for some tuning of memory
management, some caching, and some refactoring.
While the PL/pgSQL work is more of a top-level goal, I added the
PL/Python implementation because it is easier to map the C API straight
out to something more accessible, so testing it out is much easier.
The main technical problem I had with PL/pgSQL is how to parse named
parameters. If you're in PL/Python, say, you do
plan = a.prepare("INSERT INTO test1 (a, b) VALUES ($1, $2)",
                 ["int4", "text"])
and that works fine, because it maps straight to the client/server
protocol. But in PL/pgSQL, you will want something like
DECLARE
  x, y ...
BEGIN
  INSERT INTO test1 (a, b) VALUES (x, y)
When running in-process (SPI), we install parser hooks that allow the
parser to check back into PL/pgSQL about whether x, y are variables and
what they mean. When we run in an autonomous session, we don't have
that available. So my idea was to extend the protocol Parse message to
allow sending a symbol table instead of parameter types. So instead of
saying, there are two parameters and here are their types, I would send
a list of symbols and types, and the server would respond to the Parse
message with some kind of information about which symbols it found. I
think that would work, but I got lost in the weeds and didn't get very
far. But you can see some of that in the code. If anyone has other
ideas, I'd be very interested.
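For discussion, here is a toy model (plain Python, unrelated to the actual
protocol code) of that extended Parse handshake: the client supplies the
query text plus a symbol table, and the "parser" reports back which
symbols it resolved, in parameter order. In the real proposal this
resolution would happen server-side during parsing, not by textual
substitution; this only sketches the information flow.

```python
import re

def parse_with_symbols(query, symbols):
    """Toy stand-in for an extended Parse message: 'symbols' maps
    variable names to type names. Returns the query rewritten with $n
    placeholders plus the list of (name, type) pairs the parser found,
    in parameter order -- the information the server would send back."""
    found = []  # symbols actually referenced, in $n order

    def resolve(m):
        name = m.group(0)
        if name in symbols:
            if name not in [f[0] for f in found]:
                found.append((name, symbols[name]))
            return "$%d" % ([f[0] for f in found].index(name) + 1)
        return name  # not a known symbol: leave the token alone

    rewritten = re.sub(r"[A-Za-z_][A-Za-z0-9_]*", resolve, query)
    return rewritten, found

q, found = parse_with_symbols("INSERT INTO test1 (a, b) VALUES (x, y)",
                              {"x": "int4", "y": "text"})
# q     -> 'INSERT INTO test1 (a, b) VALUES ($1, $2)'
# found -> [('x', 'int4'), ('y', 'text')]
```

The response ("which symbols did you find, and in what order") is what
would let PL/pgSQL bind only the variables the query actually uses.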
Other than that, I think there are also other bits and pieces that are
worth looking at, and perhaps have some overlap with other efforts, such as:
- Refining the internal APIs for running queries, with more flexibility
than SPI. There have recently been discussions about that. I just used
whatever was in tcop/postgres.c directly, like pg_background does, and
that seems mostly fine, but if there are other ideas, they would be
useful for this, too.
- An exception to the "mostly fine" is that the desirable handling of
log_statement, log_duration, log_min_duration_statement for
non-top-level execution is unclear.
- The autonomous session API could also be useful for other things, such
as perhaps implementing a variant of pg_background on top of them, or
doing other asynchronous or background execution schemes. So input on
that is welcome.
- There is some overlap with the protocol handling for parallel query,
including things like error propagation, notify handling, encoding
handling. I suspect that other background workers will need similar
facilities, so we could simplify some of that.
- Client encoding in particular was recently discussed for parallel
query. The problem with the existing solution is that it makes
assign_client_encoding() require hardcoded knowledge of all relevant
background worker types. So I tried a more general solution, with a hook.
- I added new test files in the plpgsql directory. The main test for
plpgsql runs as part of the main test suite. Maybe we want to move that
to the plpgsql directory as well.
- More guidance for using some of the background worker and shared
memory queue facilities. For example, I don't know what a good queue
size would be.
- Both PL/pgSQL and PL/Python expose some details of SPI in ways that
make it difficult to run some things not through SPI. For example,
return codes are exposed directly by PL/Python. PL/pgSQL is heavily
tied to the API flow of SPI. It's fixable, but it will be some work. I
had originally wanted to hide the autonomous session API inside SPI or
make it fully compatible with SPI, but that was quickly thrown out.
PL/Python now contains some ugly code to make certain things match up so
that existing code can be used. It's not always pretty.
- The patch "Set log_line_prefix and application name in test drivers"
(https://commitfest.postgresql.org/10/717/) is helpful in testing and
debugging this.
[0]: /messages/by-id/CA+Tgmoam66dTzCP8N2cRcS6S6dBMFX+JMba+mDf68H=KAkNjPQ@mail.gmail.com
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
autonomous.patch (text/x-patch, +2095/-29)
I would love to see autonomous transactions in core.
I just have one major concern, but thankfully it's easily addressed.
There should be a way to permanently block autonomous transactions
within the session and/or txn.
This is important if you as a caller function want to be sure none of
the work made by anything called down the stack gets committed.
That is, if you as a caller decide to roll back, e.g. by raising an
exception, you want to be sure *everything* gets rolled back,
including all work by functions you've called.
If the caller can't control this, then the author of the calling
function would need to inspect the source code of every function being
called, to be sure no code uses autonomous transactions.
Coding conventions, rules and discipline are all good and will help
against misuse of the feature, but some day someone will make a
mistake and wrongly use an autonomous transaction, causing unwanted,
unexpected side-effects that I as the calling function didn't know
about.
Once you have blocked autonomous transactions in a session or txn,
then any function called must not be able to unblock it (in the
session or txn), otherwise it defeats the purpose.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 30 August 2016 at 23:10, Joel Jacobson <joel@trustly.com> wrote:
There should be a way to within the session and/or txn permanently
block autonomous transactions.
This will defeat one of the use cases of autonomous transactions: auditing
Coding conventions, rules and discipline are all good and will help
against misuse of the feature, but some day someone will make a
mistake and wrongly use the autonomous transaction and cause unwanted
unknown side-effect I as a caller function didn't expect or know
about.
well, if the feature is not guilty why do you want to put it in jail?
--
Jaime Casanova www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 30 August 2016 at 20:50, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
- Patches to PL/pgSQL to implement Oracle-style autonomous transaction
blocks:
AS $$
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 0..9 LOOP
    START TRANSACTION;
    INSERT INTO test1 VALUES (i);
    IF i % 2 = 0 THEN
      COMMIT;
    ELSE
      ROLLBACK;
    END IF;
  END LOOP;
  RETURN 42;
END;
$$;
is this the syntax it will use?
i just compiled this in head and created a function based on this one.
The main difference is that the column in test1 is a pk, so i used
INSERT ... ON CONFLICT DO NOTHING
and i'm getting this error
postgres=# select foo();
LOG: namespace item variable itemno 1, name val
CONTEXT: PL/pgSQL function foo() line 7 at SQL statement
STATEMENT: select foo();
ERROR: null value in column "i" violates not-null constraint
DETAIL: Failing row contains (null).
STATEMENT: INSERT INTO test1 VALUES (val) ON CONFLICT DO NOTHING
ERROR: null value in column "i" violates not-null constraint
DETAIL: Failing row contains (null).
CONTEXT: PL/pgSQL function foo() line 7 at SQL statement
STATEMENT: select foo();
ERROR: null value in column "i" violates not-null constraint
DETAIL: Failing row contains (null).
CONTEXT: PL/pgSQL function foo() line 7 at SQL statement
this happens every time i use the PRAGMA, even if no START
TRANSACTION, COMMIT or ROLLBACK is used
--
Jaime Casanova www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Aug 31, 2016 at 6:41 AM, Jaime Casanova
<jaime.casanova@2ndquadrant.com> wrote:
On 30 August 2016 at 23:10, Joel Jacobson <joel@trustly.com> wrote:
There should be a way to within the session and/or txn permanently
block autonomous transactions.
This will defeat one of the use cases of autonomous transactions: auditing
My idea on how to deal with this would be to mark the function to be
"AUTONOMOUS" similar to how a function is marked to be "PARALLEL
SAFE",
and to throw an error if a caller that has blocked autonomous
transactions tries to call a function that is marked to be autonomous.
That way none of the code that needs to be audited would ever get executed.
2016-08-31 15:09 GMT+02:00 Joel Jacobson <joel@trustly.com>:
On Wed, Aug 31, 2016 at 6:41 AM, Jaime Casanova
<jaime.casanova@2ndquadrant.com> wrote:
On 30 August 2016 at 23:10, Joel Jacobson <joel@trustly.com> wrote:
There should be a way to within the session and/or txn permanently
block autonomous transactions.
This will defeat one of the use cases of autonomous transactions: auditing
My idea on how to deal with this would be to mark the function to be
"AUTONOMOUS" similar to how a function is marked to be "PARALLEL
SAFE",
and to throw an error if a caller that has blocked autonomous
transactions tries to call a function that is marked to be autonomous.
That way none of the code that needs to be audited would ever get executed.
I like this idea - it allows better (cleaner) snapshot isolation.
Regards
Pavel
On Wed, Aug 31, 2016 at 2:50 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
- An API interface to open a "connection" to a background worker, run
queries, get results: AutonomousSessionStart(), AutonomousSessionEnd(),
AutonomousSessionExecute(), etc. The communication happens using the
client/server protocol.
I'm surprised by the background worker. I had envisioned autonomous
transactions being implemented by allowing a process to reserve a
second entry in PGPROC with the same pid, or perhaps to save its
existing information in a separate PGPROC slot (or stack of slots) and
restore it after the autonomous transaction commits.
Using a background worker means that the autonomous transaction can't
access any state from the process memory. Parameters in plpgsql are a
symptom of this but I suspect there will be others. What happens if a
statement timeout occurs during an autonomous transaction? What
happens if you use a pl language in the autonomous transaction and if
it tries to use non-transactional information such as prepared
statements?
--
greg
On 31 August 2016 at 21:46, Greg Stark <stark@mit.edu> wrote:
On Wed, Aug 31, 2016 at 2:50 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
- An API interface to open a "connection" to a background worker, run
queries, get results: AutonomousSessionStart(), AutonomousSessionEnd(),
AutonomousSessionExecute(), etc. The communication happens using the
client/server protocol.
Peter, you mention "Oracle-style autonomous transaction blocks".
What are the semantics to be expected of those with regards to:
- Accessing objects exclusively locked by the outer xact or where the
requested lockmode conflicts with a lock held by the outer xact
- Visibility of data written by the outer xact
?
Also, is it intended (outside the plpgsql interface) that the
autonomous xact can proceed concurrently/interleaved with a local
backend xact? i.e. the local backend xact isn't suspended and you're
allowed to do things on the local backend as well? If so, what
handling do you have in mind for deadlocks between the local backend
xact and the bgworker with the autonomous xact? I'd expect the local
backend to always win, killing the autonomous xact every time.
I'm surprised by the background worker. I had envisioned autonomous
transactions being implemented by allowing a process to reserve a
second entry in PGPROC with the same pid. Or perhaps save its existing
information in a separate pgproc slot (or stack of slots) and
restoring it after the autonomous transaction commits.
I suspect that there'll be way too much code that relies on stashing
xact-scoped stuff in globals for that to be viable. Caches alone.
Peter will be able to explain more, I'm sure.
We'd probably need a new transaction data object that everything
xact-scoped hangs off, so we can pass it everywhere or swap it out of
some global. The mechanical refactoring alone would be pretty scary,
not to mention the complexity of actually identifying all the less
obvious places that need changing.
Consider invalidation callbacks. They're always "fun", and so simple
to get right....
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 31/08/16 16:11, Craig Ringer wrote:
On 31 August 2016 at 21:46, Greg Stark <stark@mit.edu> wrote:
On Wed, Aug 31, 2016 at 2:50 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
- An API interface to open a "connection" to a background worker, run
queries, get results: AutonomousSessionStart(), AutonomousSessionEnd(),
AutonomousSessionExecute(), etc. The communication happens using the
client/server protocol.
Peter, you mention "Oracle-style autonomous transaction blocks".
What are the semantics to be expected of those with regards to:
- Accessing objects exclusively locked by the outer xact or where the
requested lockmode conflicts with a lock held by the outer xact
- Visibility of data written by the outer xact
That would be my question as well.
Also, is it intended (outside the plpgsql interface) that the
autonomous xact can proceed concurrently/interleaved with a local
backend xact? i.e. the local backend xact isn't suspended and you're
allowed to do things on the local backend as well? If so, what
handling do you have in mind for deadlocks between the local backend
xact and the bgworker with the autonomous xact? I'd expect the local
backend to always win, killing the autonomous xact every time.
I would expect that in PLs it's handled by them; if you misuse this at
the C level, that's your problem?
I'm surprised by the background worker. I had envisioned autonomous
transactions being implemented by allowing a process to reserve a
second entry in PGPROC with the same pid. Or perhaps save its existing
information in a separate pgproc slot (or stack of slots) and
restoring it after the autonomous transaction commits.
I suspect that there'll be way too much code that relies on stashing
xact-scoped stuff in globals for that to be viable. Caches alone.
Peter will be able to explain more, I'm sure.
I can also see some advantages in the bgworker approach. For example,
it could be used for a "fire and forget" type of interface in the
future, where you return as soon as you send the exec and don't care
about waiting for the result.
--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 31 August 2016 at 14:09, Joel Jacobson <joel@trustly.com> wrote:
On Wed, Aug 31, 2016 at 6:41 AM, Jaime Casanova
<jaime.casanova@2ndquadrant.com> wrote:
On 30 August 2016 at 23:10, Joel Jacobson <joel@trustly.com> wrote:
There should be a way to within the session and/or txn permanently
block autonomous transactions.
This will defeat one of the use cases of autonomous transactions: auditing
My idea on how to deal with this would be to mark the function to be
"AUTONOMOUS" similar to how a function is marked to be "PARALLEL
SAFE",
and to throw an error if a caller that has blocked autonomous
transactions tries to call a function that is marked to be autonomous.
That way none of the code that needs to be audited would ever get executed.
Not sure I see why you would want to turn off execution for only some functions.
What happens if your function calls some other function with
side-effects? How would you roll that back? How would you mark
functions for the general case?
Functions with side effects can't be tested with simple unit tests;
that has nothing to do with autonomous transactions.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Aug 31, 2016 at 3:11 PM, Craig Ringer <craig@2ndquadrant.com> wrote:
I suspect that there'll be way too much code that relies on stashing
xact-scoped stuff in globals for that to be viable. Caches alone.
Peter will be able to explain more, I'm sure.
We'd probably need a new transaction data object that everything
xact-scoped hangs off, so we can pass it everywhere or swap it out of
some global. The mechanical refactoring alone would be pretty scary,
not to mention the complexity of actually identifying all the less
obvious places that need changing.
Well, this is the converse of the same problem. Today, process state
and transaction state are tied together. One way or another you're trying to
split that -- either by having two processes share state or by having
one process manage two transactions.
I suppose we already have the infrastructure for parallel query so
there's at least some shared problem space there.
--
greg
On Aug 31, 2016, at 6:46 AM, Greg Stark <stark@mit.edu> wrote:
Using a background worker mean that the autonomous transaction can't
access any state from the process memory. Parameters in plpgsql are a
symptom of this but I suspect there will be others. What happens if a
statement timeout occurs during an autonomous transaction? What
happens if you use a pl language in the autonomous transaction and if
it tries to use non-transactional information such as prepared
statements?
+1 on this.
The proposed solution loosely matches what was done in DB2 9.7, and it runs into the same
complexity. Passing local variables or session-level variables back and forth became a source of grief.
At SFDC PG we have taken a different tack:
1. Gather up all the transaction state that is scattered across global variables into a struct.
2. Backup/restore that transaction state when an autonomous transaction is invoked.
This allows full access to all non-transactional state.
The downside is that full access also includes uncommitted DDL (shared relcache),
so we had to restrict DDL in the parent transaction prior to spawning the child.
If there is interest in exploring this kind of solution as an alternative I can elaborate.
Cheers
Serge
On 08/31/2016 03:09 PM, Joel Jacobson wrote:
On Wed, Aug 31, 2016 at 6:41 AM, Jaime Casanova
<jaime.casanova@2ndquadrant.com> wrote:
On 30 August 2016 at 23:10, Joel Jacobson <joel@trustly.com> wrote:
There should be a way to within the session and/or txn permanently
block autonomous transactions.
This will defeat one of the use cases of autonomous transactions: auditing
My idea on how to deal with this would be to mark the function to be
"AUTONOMOUS" similar to how a function is marked to be "PARALLEL
SAFE",
and to throw an error if a caller that has blocked autonomous
transactions tries to call a function that is marked to be autonomous.
That way none of the code that needs to be audited would ever get executed.
Part of what people want this for is to audit what people *try* to do.
We can already audit what they've actually done.
With your solution, we still wouldn't know when an unauthorized attempt
to do something happened.
--
Vik Fearing +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Wed, 31 Aug 2016 14:46:30 +0100
Greg Stark <stark@mit.edu> wrote:
On Wed, Aug 31, 2016 at 2:50 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
- An API interface to open a "connection" to a background worker, run
queries, get results: AutonomousSessionStart(),
AutonomousSessionEnd(), AutonomousSessionExecute(), etc. The
communication happens using the client/server protocol.
I'm surprised by the background worker. I had envisioned autonomous
transactions being implemented by allowing a process to reserve a
second entry in PGPROC with the same pid. Or perhaps save its existing
information in a separate pgproc slot (or stack of slots) and
restoring it after the autonomous transaction commits.
Using a background worker means that the autonomous transaction can't
access any state from the process memory. Parameters in plpgsql are a
symptom of this but I suspect there will be others. What happens if a
statement timeout occurs during an autonomous transaction? What
happens if you use a pl language in the autonomous transaction and if
it tries to use non-transactional information such as prepared
statements?
I am trying to implement autonomous transactions that way. I
have already implemented suspending and restoring the parent
transaction state, at least some of it. The next thing on
the plan is the procarray/snapshot stuff. I think we should
reuse the same PGPROC for the autonomous transaction, and
allocate a stack of PGXACTs for the case of nested
autonomous transactions.
Solving the more general problem, running multiple
concurrent transactions with a single backend, may also be
interesting for some users. Autonomous transactions would
then be just a use case for that feature.
Regards,
Constantin Pan
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On Thu, Sep 1, 2016 at 12:12 AM, Vik Fearing <vik@2ndquadrant.fr> wrote:
Part of what people want this for is to audit what people *try* to do.
We can already audit what they've actually done.
With your solution, we still wouldn't know when an unauthorized attempt
to do something happened.
The unauthorized attempt to execute the function would still be logged
to the PostgreSQL log file, since it would throw an error, just like
trying to connect with e.g. an invalid username is logged.
I think that's enough for that use case, since it's arguably not an
application-layer problem: no part of the code was ever executed.
But if someone tries to execute a function where one of the input
params is a password, the function raises an exception if the password
is incorrect, and you want to log the unauthorized attempt, then that
would be a good example of when you could, and would need to, use
autonomous transactions to log the invalid password attempt.
On Wed, Aug 31, 2016 at 6:22 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
On 31 August 2016 at 14:09, Joel Jacobson <joel@trustly.com> wrote:
My idea on how to deal with this would be to mark the function to be
"AUTONOMOUS" similar to how a function is marked to be "PARALLEL
SAFE",
and to throw an error if a caller that has blocked autonomous
transactions tries to call a function that is marked to be autonomous.
That way none of the code that needs to be audited would ever get executed.
Not sure I see why you would want to turn off execution for only some functions.
What happens if your function calls some other function with
side-effects?
I'm not sure I understand your questions. All volatile functions that
modify data have side-effects. What I meant was whether they are
allowed to commit even if the caller doesn't want them to.
However, I'll try to clarify the two scenarios I envision:
1. If a function is declared AUTONOMOUS and it gets called,
then that means nothing in the txn has blocked autonomous yet,
and that function and any other function will be able to do autonomous
txns from here on, so if some function then tries to block autonomous,
that throws an error.
2. If a function has blocked autonomous and something later on
tries to call a function declared AUTONOMOUS then that would throw an error.
Basically, we start with a NULL state where autonomous is neither
blocked nor explicitly allowed. Whatever happens first decides whether
autonomous transactions will explicitly be blocked or allowed during the txn.
So we can go from NULL -> AUTONOMOUS ALLOWED
or NULL -> AUTONOMOUS BLOCKED,
but that's the only two state transitions possible.
Once set, it cannot be changed.
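As a sketch of that one-way state machine (illustrative Python; the
class and method names are made up, not part of any patch):

```python
class AutonomousPolicy:
    """Per-txn switch: starts undecided (None); the first relevant event
    fixes it to 'allowed' or 'blocked', and it can never change after."""

    def __init__(self):
        self.state = None  # the NULL state: neither blocked nor allowed

    def _decide(self, wanted):
        if self.state is None:
            self.state = wanted  # NULL -> ALLOWED or NULL -> BLOCKED
        elif self.state != wanted:
            raise RuntimeError(
                "autonomous transactions already %s in this txn" % self.state)

    def block_autonomous(self):
        self._decide("blocked")

    def call_autonomous_function(self):
        self._decide("allowed")
        # ... here the AUTONOMOUS function would actually run ...

txn = AutonomousPolicy()
txn.call_autonomous_function()   # first event fixes the state to "allowed"

block_failed = False
try:
    txn.block_autonomous()       # too late: the state is already fixed
except RuntimeError:
    block_failed = True
```

The symmetric case works the same way: if block_autonomous() runs first,
any later call to an AUTONOMOUS function raises instead.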
If nothing in an application cares about autonomous transactions,
they don't have to do anything special, they don't need to modify any
of their code.
But if for some reason it is important to block autonomous
transactions, because the application is written in a way where a
RAISE EXCEPTION is expected to always roll back everything, then the
author of such an application (e.g. me) can just block autonomous
transactions and continue to live happily ever after, without having
nightmares about developers misusing the feature, and use it only when
appropriate.
On 2016-08-31 06:10:31 +0200, Joel Jacobson wrote:
This is important if you as a caller function want to be sure none of
the work made by anything called down the stack gets committed.
That is, if you as a caller decide to rollback, e.g. by raising an
exception, and you want to be sure *everything* gets rollbacked,
including all work by functions you've called.
If the caller can't control this, then the author of the caller
function would need to inspect the source code of all function being
called, to be sure there are no code using autonomous transactions.
I'm not convinced this makes much sense. All FDWs, dblink etc. already
allow you to do stuff outside of the transaction.
Andres
On Thu, Sep 1, 2016 at 8:38 PM, Andres Freund <andres@anarazel.de> wrote:
I'm not convinced this makes much sense. All FDWs, dblink etc. already
allow you to do stuff outside of the transaction.
You as a DBA can prevent FDWs from being used, and dblink is an
extension that you don't have to install.
So if preventing side-effects is necessary in your application, that
can be achieved simply by not installing dblink and preventing FDWs.
On Wed, Aug 31, 2016 at 7:20 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
I would like to propose the attached patch implementing autonomous
transactions for discussion and review.
I'm pretty skeptical of this approach. Like Greg Stark, Serge Rielau,
and Constantin Pan, I had expected that autonomous transactions should
be implemented inside of a single backend, without relying on workers.
That approach would be much less likely to run afoul of limits on the
number of background workers, and it will probably perform
considerably better too, especially when the autonomous transaction
does only a small amount of work, like inserting a log message
someplace. That is not to say that providing an interface to some
pg_background-like functionality is a bad idea; there's been enough
interest in that from various quarters to suggest that it's actually
quite useful, and I don't even think that it's a bad plan to integrate
that with the PLs in some way. However, I think that it's a different
feature than autonomous transactions. As others have also noted, it
can be used to fire-and-forget a command, or to run a command while
foreground processing continues, both of which would be out of scope
for an autonomous transaction facility per se. So my suggestion is
that you pursue the work but change the name.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 9/2/16 3:45 AM, Robert Haas wrote:
So my suggestion is
that you pursue the work but change the name.
That might make the plpgsql issues significantly easier to deal with as
well, by making it very explicit that you're doing something with a
completely separate connection. That would make requiring special
handling for passing plpgsql variables to a query much less confusing.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461