INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

Started by Justin Pryzby, over 9 years ago, 3 messages (pgsql-general)
#1 Justin Pryzby
pryzby@telsasoft.com

Our application INSERTs data from external sources, and infrequently UPDATEs
the previously-inserted data (currently, it first SELECTs to determine whether
to UPDATE).

I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
running into a problem when the table has >830 columns (we have some tables
which are at the 1600 column limit, and have previously worked around that
limit using arrays or multiple tables).

I tried to work around the upsert problem by using pygresql with inline=True
(instead of the default PREPAREd statements), but both approaches hit the same
limit, since the check is enforced in the server's parser.

I created a test script which demonstrates the problem (attached).

It seems to me that there's currently no way to "upsert" such a wide table?

I see:
./src/include/access/htup_details.h:
    #define MaxTupleAttributeNumber 1664    /* 8 * 208 */

./src/backend/parser/parse_node.c:
    /*
     * Check that we did not produce too many resnos; at the very least we
     * cannot allow more than 2^16, since that would exceed the range of a
     * AttrNumber.  It seems safest to use MaxTupleAttributeNumber.
     */
    if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("target lists can have at most %d entries",
                        MaxTupleAttributeNumber)));
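
In case it helps anyone reading the archives, here is a condensed sketch of
the failure (this is not the attached script; the table name wide_t and the
900-column count are invented for illustration). The arithmetic seems
consistent with the INSERT list and the upsert's auxiliary list being counted
together: roughly 2N target entries for an N-column upsert, and 2 * 832 =
1664, which matches the ~830-column threshold:

DO $$
DECLARE
    n    int := 900;   -- anything much past ~832 trips the check
    cols text;
    vals text;
    sets text;
BEGIN
    -- Build "c1 int, c2 int, ...", the INSERT column list, and the SET list.
    SELECT string_agg(format('c%s int', i), ', ' ORDER BY i),
           string_agg(format('c%s', i), ', ' ORDER BY i),
           string_agg(format('c%s = EXCLUDED.c%s', i, i), ', ' ORDER BY i)
      INTO cols, vals, sets
      FROM generate_series(1, n) AS i;

    EXECUTE format('CREATE TABLE wide_t (k int PRIMARY KEY, %s)', cols);

    -- ERROR:  target lists can have at most 1664 entries
    EXECUTE format(
        'INSERT INTO wide_t (k, %s) SELECT 1, %s
         ON CONFLICT (k) DO UPDATE SET %s',
        vals,
        (SELECT string_agg('0', ', ') FROM generate_series(1, n)),
        sets);
END $$;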

Thanks in advance for any suggestions.

Justin

Attachments:

wide_upsert.txt (text/plain; charset=us-ascii)
#2 Rob Sargent
robjsargent@gmail.com
In reply to: Justin Pryzby (#1)
Re: INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

On Dec 4, 2016, at 9:32 AM, Justin Pryzby <pryzby@telsasoft.com> wrote:

> Our application INSERTs data from external sources, and infrequently UPDATEs
> the previously-inserted data (currently, it first SELECTs to determine whether
> to UPDATE).
>
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).
>
> I tried to work around the upsert problem by using pygresql with inline=True
> (instead of the default PREPAREd statements), but both approaches hit the same
> limit, since the check is enforced in the server's parser.
>
> I created a test script which demonstrates the problem (attached).
>
> It seems to me that there's currently no way to "upsert" such a wide table?

Pardon my intrusion here, but I'm really curious what sort of datum has so many attributes?


#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Justin Pryzby (#1)
Re: INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

Justin Pryzby <pryzby@telsasoft.com> writes:

> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).

Yeah, this is a bug, but fortunately the fix is pretty trivial. See

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=da05d0ebc637a84ba41a172b32552557ebad199f

regards, tom lane
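
For anyone who lands on this thread later: assuming the fix means the INSERT
target list and the lists built for the ON CONFLICT clause are no longer
charged against a single counter (an assumption from the thread context, not
verified against the commit), even a table at the column limit should then
upsert, since an INSERT list of 1600 entries and a SET list of 1599 each fit
under 1664. A sketch re-using the generator from upthread, with n raised to
1599 so the table sits at the 1600-column maximum once the key is added:

DO $$
DECLARE
    n    int := 1599;  -- 1600 columns total, including the key
    cols text;
    vals text;
    sets text;
BEGIN
    SELECT string_agg(format('c%s int', i), ', ' ORDER BY i),
           string_agg(format('c%s', i), ', ' ORDER BY i),
           string_agg(format('c%s = EXCLUDED.c%s', i, i), ', ' ORDER BY i)
      INTO cols, vals, sets
      FROM generate_series(1, n) AS i;

    EXECUTE format('CREATE TABLE max_t (k int PRIMARY KEY, %s)', cols);

    -- Before the fix this statement fails the 1664-entry check; with the
    -- fix applied it should succeed (assumption as stated above).
    EXECUTE format(
        'INSERT INTO max_t (k, %s) SELECT 1, %s
         ON CONFLICT (k) DO UPDATE SET %s',
        vals,
        (SELECT string_agg('0', ', ') FROM generate_series(1, n)),
        sets);
END $$;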
