Commands per transaction
ERROR: cannot have more than 2^32-1 commands in a transaction
Is there any way of bumping this limit or am I stuck breaking up the
transaction?
--
Rod Taylor wrote:
Is there any way of bumping this limit or am I stuck breaking up the
transaction?
Wow, I never heard of anyone reaching the limit :-( Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
--
Alvaro Herrera Developer, http://www.PostgreSQL.org
"[PostgreSQL] is a great group; in my opinion it is THE best open source
development community in existence anywhere." (Lamar Owen)
On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
Rod Taylor wrote:
Is there any way of bumping this limit or am I stuck breaking up the
transaction?
Wow, I never heard of anyone reaching the limit :-( Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
It was a complex plpgsql function iterating through a large volume of
data. It would appear that each IF statement in plpgsql counts as a
separate command, because I don't believe the FOR ... EXECUTE
'SELECT ...' loop was dealing with more than about 1.2B rows.
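A rough back-of-the-envelope check makes this plausible. The 1.2B row
count is from the message above; the statements-per-iteration figures
are assumptions for illustration, since each plpgsql statement consumes
a CommandId:

```python
# CommandId is a 32-bit counter, so one transaction gets at most
# 2^32 - 1 command IDs before hitting the error in this thread.
LIMIT = 2**32 - 1          # 4,294,967,295 commands per transaction
rows = 1_200_000_000       # ~1.2B rows iterated in the FOR ... EXECUTE loop

# At 3 statements per loop iteration the transaction still fits;
# at 4 (e.g. the EXECUTE plus a few IF branches) it overflows.
print(3 * rows <= LIMIT)   # True  -- under the limit
print(4 * rows <= LIMIT)   # False -- over the limit
```

So even a couple of extra IF branches per row is enough to push a
billion-row loop past the limit.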
--
On Sat, 21 Jan 2006, Alvaro Herrera wrote:
Rod Taylor wrote:
Is there any way of bumping this limit or am I stuck breaking up the
transaction?
Wow, I never heard of anyone reaching the limit :-( Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
Would it be possible to increment the command counter only for commands
that do updates?
- Heikki
On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
Rod Taylor wrote:
Is there any way of bumping this limit or am I stuck breaking up the
transaction?
Wow, I never heard of anyone reaching the limit :-( Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
Does a subtransaction receive an independent command counter? If so, I
could shove a bunch of work into large subtransactions, allowing me to
avoid bumping the main transaction's command counter.
Heck, PostgreSQL could automatically create a subtransaction (savepoint)
with no rollback name. A commit or rollback sent by the client would
always operate on the parent transaction, and the child would follow
(committing or rolling back as required).
--
Heikki Linnakangas <hlinnaka@iki.fi> writes:
Would it be possible to increment the command counter only for commands
that do updates?
Probably that would work, but I'm unsure whether we have infrastructure
that would let you detect that reliably...
regards, tom lane
Rod Taylor <pg@rbt.ca> writes:
Does a subtransaction receive an independent command counter?
No such luck, and IIRC that would not be easy at all to change :-(
regards, tom lane
Rod Taylor schrieb:
On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
Rod Taylor wrote:
Is there any way of bumping this limit or am I stuck breaking up the
transaction?
Wow, I never heard of anyone reaching the limit :-( Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
...
As a quick fix (quick to implement, quick to run ;)
you could load your update data into a temp table via
COPY and then update the large table in one sweep.
I guess you don't have 2^31 tables to update? ;)
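Tino's suggestion could look roughly like this (table and column names
are made up for illustration):

```sql
-- Stage the new values with COPY, then apply them in a single UPDATE,
-- which consumes only a handful of CommandIds instead of one per row.
CREATE TEMP TABLE staged (id integer PRIMARY KEY, new_val text);

COPY staged FROM STDIN;       -- one command, regardless of row count

UPDATE big_table b
SET    val = s.new_val
FROM   staged s
WHERE  b.id = s.id;           -- one command for the whole sweep
```

The catch, as Rod notes below, is that this only helps when the per-row
logic can be expressed as a single set-oriented statement.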
On Sat, 2006-01-21 at 22:30 +0100, Tino Wildenhain wrote:
Rod Taylor schrieb:
On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
Rod Taylor wrote:
Is there any way of bumping this limit or am I stuck breaking up the
transaction?
Wow, I never heard of anyone reaching the limit :-( Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
As a quick fix (quick to implement, quick to run ;)
you could load your update data into a temp table via
COPY and then update the large table in one sweep.
I guess you don't have 2^31 tables to update? ;)
I wouldn't have written a plpgsql function if I were doing the same thing
with every tuple. Running multiple statements across the 40GB heap would
take much longer to complete.
I've resorted to a client-side script that COPYs the data out of various
partitions in the main structure, applies the logic, then COPYs it back
in to a new partition. Once complete, the original partition is dropped
so the datasets swap.
This is somewhat slower than my original method of doing it all on the
server side but not by too much since the reduced CPU load seems to
offset the increased network IO.
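The client-side pipeline described above could be sketched as a filter
over COPY's tab-separated text format. The per-row transformation here
is hypothetical (the thread never shows the actual logic), and the
table names in the usage line are made up:

```python
import sys

def transform_row(fields):
    # Hypothetical per-row logic standing in for the plpgsql function's
    # branching; here we just normalise the second column to lower case.
    fields = list(fields)
    fields[1] = fields[1].lower()
    return fields

def main(src=sys.stdin, dst=sys.stdout):
    # COPY ... TO STDOUT emits one tab-separated line per row, so the
    # filter can stream rows without holding the partition in memory.
    for line in src:
        fields = line.rstrip("\n").split("\t")
        dst.write("\t".join(transform_row(fields)) + "\n")

if __name__ == "__main__":
    main()
```

Wired up with psql (partition names are placeholders), the swap might
look like:
psql -c "COPY part_old TO STDOUT" | python transform.py | psql -c "COPY part_new FROM STDIN"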
--