[Fwd: Re: Transactions and temp tables]
Hi all,
I just saw that this new patch was not considered because the previous
version ended up being rejected.
Note that this version of the patch aims at supporting ONLY temp tables
that are created AND dropped in the same transaction. We need to be able
to use temp tables in transactions that are doing 2PC, but the temp
table lifespan does not need to cross transaction boundaries.
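For reference, here is a minimal sketch of the usage pattern this version of the patch targets (the table, column, and transaction names below are made up for illustration and are not taken from the patch or its test cases):
begin;
-- the temp table exists only inside this transaction
create temp table scratch(x int);
insert into scratch values (1);
-- ... work with the table ...
drop table scratch;
-- the temp table is already gone at PREPARE time, so 2PC should be allowed
prepare transaction 'tx_using_temp_table';
commit prepared 'tx_using_temp_table';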
Please let me know if this patch could be integrated in 8.4.
Thanks,
Emmanuel
-------- Original Message --------
Subject: Re: [HACKERS] Transactions and temp tables
Date: Mon, 01 Dec 2008 22:00:48 -0500
From: Emmanuel Cecchet <manu@frogthinker.org>
Organization: Frog Thinker
To: Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>
CC: Tom Lane <tgl@sss.pgh.pa.us>, pgsql-hackers@postgresql.org
References: <48EA7B12.6050906@frogthinker.org>
<48EB0551.50205@enterprisedb.com> <48EB95B7.10604@frogthinker.org>
<48EC7EB5.5020001@enterprisedb.com> <48ECAA8F.3070306@frogthinker.org>
<28137.1223472191@sss.pgh.pa.us> <48ECBBC2.6060003@frogthinker.org>
<29132.1223475486@sss.pgh.pa.us> <48EFA13C.2060407@frogthinker.org>
<490F804A.60108@enterprisedb.com> <490FB4B3.908@frogthinker.org>
<49106B10.2090407@enterprisedb.com> <4910DFC0.3080301@frogthinker.org>
<4914CD14.2060608@frogthinker.org> <4921C42A.3030600@enterprisedb.com>
<492221A3.1060608@frogthinker.org> <4922A798.7050709@enterprisedb.com>
<49232993.1050801@enterprisedb.com> <49243376.2000404@frogthinker.org>
<492543D5.9050904@enterprisedb.com> <492B377F.4080602@frogthinker.org>
<49344E22.2000008@frogthinker.org>
Heikki,
I have extended the patch to allow temp tables that have been
created/dropped within the same transaction (and also on commit drop).
There is a problem with temp tables with ON COMMIT DELETE ROWS that are
created inside a transaction.
Take the 2pc_on_delete_rows_transaction.sql test case and change the
creation statement: instead of
create temp table foo(x int) on commit delete rows;
try
create temp table foo(x serial primary key) on commit delete rows;
The test will fail. It looks like the onCommit field is not properly
updated when a serial or primary key column is used in that context. I
have not figured out why.
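For illustration, here is a sketch of the sequence that fails with the modified statement (the actual contents of 2pc_on_delete_rows_transaction.sql are not reproduced here, so the surrounding statements are only assumed):
begin;
-- the serial column implicitly creates a sequence owned by the temp table
create temp table foo(x serial primary key) on commit delete rows;
insert into foo values (default);
-- this is where the test reportedly breaks, presumably because onCommit is
-- not recorded as ON COMMIT DELETE ROWS when serial/primary key is involved
prepare transaction 'foo_on_delete_rows';
commit prepared 'foo_on_delete_rows';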
Waiting for your feedback,
Emmanuel
Emmanuel Cecchet wrote:
I think that the Assert in is_temp_rel(Oid) in tablecmds.c should be
replaced by: if (on_commits == NULL) return false;
As the use case below shows, a regular table can be created and hold a
LOCKTAG_RELATION lock that will trigger the call to is_temp_rel in
is_preparable_locktag. The Assert will break if no temp table was
accessed.
As we were also trying to list potential issues: if the temp table
uses a SERIAL type, could there be a problem with the sequence at
prepare time?
Emmanuel
The following test fails with your patch on my system. Could you
check if you can reproduce?
psql (8.4devel)
Type "help" for help.
test=# begin;
BEGIN
test=# create table paul(x int);
CREATE TABLE
test=# insert into paul values(1);
INSERT 0 1
test=# prepare transaction 'persistentTableShouldSucceed';
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
---
LOG: database system is ready to accept connections
TRAP: FailedAssertion("!(on_commits != ((void *)0))", File:
"tablecmds.c", Line: 7823)
LOG: server process (PID 15969) was terminated by signal 6: Aborted
LOG: terminating any other active server processes
FATAL: the database system is in recovery mode
Thanks,
manu
--
Emmanuel Cecchet
FTO @ Frog Thinker
Open Source Development & Consulting
--
Web: http://www.frogthinker.org
email: manu@frogthinker.org
Skype: emmanuel_cecchet
--
Emmanuel Cecchet
Aster Data Systems - Frontline data warehousing
Web: http://www.asterdata.com
Emmanuel Cecchet wrote:
I just saw that this new patch was not considered because the previous
version ended up being rejected.
Note that this version of the patch aims at supporting ONLY temp tables
that are created AND dropped in the same transaction. We need to be able
to use temp tables in transactions that are doing 2PC, but the temp
table lifespan does not need to cross transaction boundaries.
Please let me know if this patch could be integrated in 8.4.
IMHO, this is just getting too kludgey. We came up with pretty good
ideas on how to handle temp tables properly, by treating them the same
as non-temp tables. That should eliminate all the problems the latest
patch had, as well as the issues with sequences, and allow all access to
temp tables, not just a limited subset. I don't think it's worthwhile to
apply the kludge as a stopgap measure; let's do it properly in 8.5.
As a workaround, you can use a regular table instead of a temporary one.
If you create and drop the regular table in the same transaction (that's
the same limitation that the latest patch has), you won't end up with a
bogus table in your database if the connection is dropped unexpectedly.
If your application uses multiple connections simultaneously, you'll
need a little bit of code in the application so that you don't try to
create a table with the same name in all backends. You could also create
a different schema for each connection, and do "set
search_path='semitempschemaX, public'", so that you can use the same
table name and still have separate tables for each connection.
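A minimal sketch of that per-connection schema approach (the schema, table, and transaction names below are hypothetical, chosen only for illustration):
-- done once for, say, connection number 42
create schema semitempschema42;
set search_path = semitempschema42, public;
-- each unit of work creates and drops its regular table in the same transaction
begin;
create table scratch(x int);
insert into scratch values (1);
drop table scratch;
prepare transaction 'workaround_tx_42';
commit prepared 'workaround_tx_42';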
(sorry for the late reply)
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Hi Heikki,
The point of using temp tables was performance. Using regular tables in
our case would hurt performance too much. Well, if we cannot get a
temporary fix in 8.4, we will maintain a separate patch to get that
functionality just for temp tables that are created and dropped in the
same transaction. Hopefully we will be able to come up with a working
solution in 8.5.
Thanks for your help,
Emmanuel
Emmanuel Cecchet wrote:
I just saw that this new patch was not considered because the previous
version ended up being rejected.
Note that this version of the patch aims at supporting ONLY temp tables
that are created AND dropped in the same transaction. We need to be able
to use temp tables in transactions that are doing 2PC, but the temp
table lifespan does not need to cross transaction boundaries.
Please let me know if this patch could be integrated in 8.4.
IMHO, this is just getting too kludgey. We came up with pretty good
ideas on how to handle temp tables properly, by treating them the same
as non-temp tables. That should eliminate all the problems the latest
patch had, as well as the issues with sequences, and allow all access to
temp tables, not just a limited subset. I don't think it's worthwhile to
apply the kludge as a stopgap measure; let's do it properly in 8.5.
As a workaround, you can use a regular table instead of a temporary one.
If you create and drop the regular table in the same transaction (that's
the same limitation that the latest patch has), you won't end up with a
bogus table in your database if the connection is dropped unexpectedly.
If your application uses multiple connections simultaneously, you'll
need a little bit of code in the application so that you don't try to
create a table with the same name in all backends. You could also create
a different schema for each connection, and do "set
search_path='semitempschemaX, public'", so that you can use the same
table name and still have separate tables for each connection.
(sorry for the late reply)
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
--
Emmanuel Cecchet
Aster Data Systems
Web: http://www.asterdata.com
Heikki Linnakangas wrote:
Emmanuel Cecchet wrote:
I just saw that this new patch was not considered because the previous
version ended up being rejected.
Note that this version of the patch aims at supporting ONLY temp tables
that are created AND dropped in the same transaction. We need to be able
to use temp tables in transactions that are doing 2PC, but the temp
table lifespan does not need to cross transaction boundaries.
Please let me know if this patch could be integrated in 8.4.
IMHO, this is just getting too kludgey. We came up with pretty good
ideas on how to handle temp tables properly, by treating them the same
as non-temp tables. That should eliminate all the problems the latest
patch had, as well as the issues with sequences, and allow all access to
temp tables, not just a limited subset. I don't think it's worthwhile to
apply the kludge as a stopgap measure; let's do it properly in 8.5.
As a workaround, you can use a regular table instead of a temporary one.
If you create and drop the regular table in the same transaction (that's
the same limitation that the latest patch has), you won't end up with a
bogus table in your database if the connection is dropped unexpectedly.
If your application uses multiple connections simultaneously, you'll
need a little bit of code in the application so that you don't try to
create a table with the same name in all backends. You could also create
a different schema for each connection, and do "set
search_path='semitempschemaX, public'", so that you can use the same
table name and still have separate tables for each connection.
Can someone tell me how this should be worded as a TODO item?
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Bruce Momjian wrote:
Heikki Linnakangas wrote:
IMHO, this is just getting too kludgey. We came up with pretty good
ideas on how to handle temp tables properly, by treating them the same
as non-temp tables. That should eliminate all the problems the latest
patch had, as well as the issues with sequences, and allow all access to
temp tables, not just a limited subset. I don't think it's worthwhile to
apply the kludge as a stopgap measure; let's do it properly in 8.5.
...
Can someone tell me how this should be worded as a TODO item?
There already is a TODO item about this:
"Allow prepared transactions with temporary tables created and dropped
in the same transaction, and when an ON COMMIT DELETE ROWS temporary
table is accessed"
I added a link to the email describing the most recent idea on how this
should be implemented.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com