"could not split GIN page; no new items fit"
Hmm, I'm trying to create a gin index, thusly:
create index foo_idx on foo using gin(entry gin_trgm_ops);
and I'm getting the error "could not split GIN page; no new items fit"
Any idea what this means, or how I can get around it? The table in
question has about 23MM rows, if that makes any difference. The only
reference that search engines returned was the source code.
select version()
PostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
4.9.1-16ubuntu6) 4.9.1, 64-bit
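(For anyone following along: gin_trgm_ops is the trigram operator class from the pg_trgm extension, so the full setup looks roughly like this. This is a sketch only; the table and column names are taken from the command above, and the row-loading step is elided.)

```sql
-- gin_trgm_ops comes from the pg_trgm contrib extension
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE foo (entry text);
-- ... load the ~23MM rows here ...

-- the failing command from the report
CREATE INDEX foo_idx ON foo USING gin (entry gin_trgm_ops);
```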
-Chris
--
If money can fix it, it's not a problem. - Tom Magliozzi
Chris Curvey <chris@chriscurvey.com> writes:
Hmm, I'm trying to create a gin index, thusly:
create index foo_idx on foo using gin(entry gin_trgm_ops);
and I'm getting the error "could not split GIN page; no new items fit"
Any idea what this means, or how I can get around it?
Looks to me like a bug (ie, the code seems to think this is a can't-happen
case). Don't suppose you could supply sample data that triggers this?
regards, tom lane
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On Fri, Apr 3, 2015 at 9:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Chris Curvey <chris@chriscurvey.com> writes:
Hmm, I'm trying to create a gin index, thusly:
create index foo_idx on foo using gin(entry gin_trgm_ops);
and I'm getting the error "could not split GIN page; no new items fit"
Any idea what this means, or how I can get around it?
Looks to me like a bug (ie, the code seems to think this is a can't-happen
case). Don't suppose you could supply sample data that triggers this?
regards, tom lane
I can! I just copied the data to a new table, obfuscated the sensitive
parts, and was able to reproduce the error. I can supply the script to
create and populate the table, but that's still clocking in at 250Mb after
being zipped. What's the best way of getting this data out to someone who
can take a look at this? (Feel free to contact me off-list to coordinate.)
-Chris
--
If money can fix it, it's not a problem. - Tom Magliozzi
On 4/4/15 8:38 AM, Chris Curvey wrote:
On Fri, Apr 3, 2015 at 9:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Chris Curvey <chris@chriscurvey.com> writes:
Hmm, I'm trying to create a gin index, thusly:
create index foo_idx on foo using gin(entry gin_trgm_ops);
and I'm getting the error "could not split GIN page; no new items fit"
Any idea what this means, or how I can get around it?
Looks to me like a bug (ie, the code seems to think this is a can't-happen
case). Don't suppose you could supply sample data that triggers this?
regards, tom lane
I can! I just copied the data to a new table, obfuscated the sensitive
parts, and was able to reproduce the error. I can supply the script to
create and populate the table, but that's still clocking in at 250Mb
after being zipped. What's the best way of getting this data out to
someone who can take a look at this? (Feel free to contact me off-list
to coordinate.)
It would be nice if you could further reduce it, but if not I'd suggest
posting it to something like DropBox and posting the public link here.
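(One hedged way to do that reduction, not from the thread itself: manually bisect the data, rebuilding the index on each half until the smallest failing subset is found. Table and column names are the ones from the thread; note that LIMIT/OFFSET without ORDER BY picks arbitrary rows, which may or may not matter here.)

```sql
-- Take roughly half the ~23MM rows (adjust the number by hand each round)
CREATE TABLE foo_half AS SELECT entry FROM foo LIMIT 11500000;
CREATE INDEX foo_half_idx ON foo_half USING gin (entry gin_trgm_ops);

-- If the error still fires, repeat on foo_half; if not, try the other half:
-- CREATE TABLE foo_other AS SELECT entry FROM foo OFFSET 11500000;
```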
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
Jim Nasby <Jim.Nasby@bluetreble.com> writes:
On 4/4/15 8:38 AM, Chris Curvey wrote:
I can! I just copied the data to a new table, obfuscated the sensitive
parts, and was able to reproduce the error. I can supply the script to
create and populate the table, but that's still clocking in at 250Mb
after being zipped. What's the best way of getting this data out to
someone who can take a look at this? (Feel free to contact me off-list
to coordinate.)
It would be nice if you could further reduce it, but if not I'd suggest
posting it to something like DropBox and posting the public link here.
So far I've been unable to reproduce the failure from Chris' data :-(
Don't know why not.
regards, tom lane
On 4/7/15 11:58 PM, Tom Lane wrote:
Jim Nasby <Jim.Nasby@bluetreble.com> writes:
On 4/4/15 8:38 AM, Chris Curvey wrote:
I can! I just copied the data to a new table, obfuscated the sensitive
parts, and was able to reproduce the error. I can supply the script to
create and populate the table, but that's still clocking in at 250Mb
after being zipped. What's the best way of getting this data out to
someone who can take a look at this? (Feel free to contact me off-list
to coordinate.)
It would be nice if you could further reduce it, but if not I'd suggest
posting it to something like DropBox and posting the public link here.
So far I've been unable to reproduce the failure from Chris' data :-(
Don't know why not.
Could it be dependent on the order of the data in the heap?
I'm assuming the field being indexed isn't one of the ones Chris had to
obfuscate...
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com