How does full text searching tokenize words? Can it be altered?
Started by Jonathan Vanasco · over 11 years ago · 1 message · general
I'm getting a handful of 'can not index words longer than 2047 characters' warnings on my `gin` indexes.
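To show what I mean by tokenizing, here's a minimal sketch of how I've been looking at what the parser produces. The `'english'` configuration and the sample string are just stand-ins for my real setup:

```sql
-- Show how the parser breaks text into tokens and which
-- dictionaries map each token to lexemes. 'english' is the stock
-- configuration; the real index may use a different one.
SELECT alias, token, lexemes
FROM ts_debug('english', 'The quick brown foxes are jumping');
```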
1. Does this 2047-character count correspond to tokens / indexed words?
2. If so, is there a way to lower this number?
3. Is there a way to profile the index for the frequency of tokens? (a sketch of the kind of thing I mean is below)
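For question 3, something like this is the shape I'm after; I assume `ts_stat()` is at least in the right direction. `documents` and `body` are made-up names for my actual table and column, and the query string should match whatever expression the index is built on:

```sql
-- Rough sketch: per-lexeme frequency stats over the same
-- to_tsvector() expression the GIN index covers.
SELECT word, ndoc, nentry
FROM ts_stat($$SELECT to_tsvector('english', body) FROM documents$$)
ORDER BY nentry DESC
LIMIT 25;

-- And to look for unusually long lexemes that did get indexed
-- (anything over the limit is silently dropped, I believe):
SELECT word, length(word)
FROM ts_stat($$SELECT to_tsvector('english', body) FROM documents$$)
WHERE length(word) > 1000
ORDER BY length(word) DESC;
```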
(Apologies in advance if this looks familiar; I posted this as part of a larger question last month. Everything but this was answered by the list, and I can't find answers to this online.)