Bug with Tsearch and tsvector

Started by Donald Fraser · 17 messages · pgsql-bugs
#1Donald Fraser
postgres@kiwi-fraser.net

PostgreSQL 8.3.10 (on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46))
OS: Linux Redhat EL 5.4
Database encoding: LATIN9

Using the default tsearch configuration, for 'english', text is being wrongly parsed into the tsvector type.
The fail condition is shown with the following example, using the ts_headline function to highlight the issue.

SELECT ts_headline('english', 'The annual financial report will shortly be posted on the Company’s web-site at
<span lang="EN-GB">http://www.harewoodsolutions.co.uk/press.aspx</span><span lang="EN-GB" style=""></span><span style="">
and a further announcement will be made once the annual financial report is available to be downloaded. </span>',
to_tsquery(''), 'MaxWords=101, MinWords=100');

Output:
"The annual financial report will shortly be posted on the Company&#8217;s web-site at
http://www.harewoodsolutions.co.uk/press.aspx&lt;/span&gt;&lt;span lang="EN-GB" style="">
and a further announcement will be made once the annual financial report is available to be downloaded. "

Expected output:
"The annual financial report will shortly be posted on the Company&#8217;s web-site at
http://www.harewoodsolutions.co.uk/press.aspx
and a further announcement will be made once the annual financial report is available to be downloaded. "

Regards
Donald Fraser

#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Donald Fraser (#1)
Re: Bug with Tsearch and tsvector

"Donald Fraser" <postgres@kiwi-fraser.net> writes:

Using the default tsearch configuration, for 'english', text is being wrongly parsed into the tsvector type.

ts_debug shows that it's being parsed like this:

alias | description | token | dictionaries | dictionary | lexemes
-----------------+---------------------------------+----------------------------------------+----------------+--------------+------------------------------------------
tag | XML tag | <span lang="EN-GB"> | {} | |
protocol | Protocol head | http:// | {} | |
url | URL | www.harewoodsolutions.co.uk/press.aspx | {simple} | simple | {www.harewoodsolutions.co.uk/press.aspx}
host | Host | www.harewoodsolutions.co.uk | {simple} | simple | {www.harewoodsolutions.co.uk}
url_path | URL path | /press.aspx</span><span | {simple} | simple | {/press.aspx</span><span}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | lang | {english_stem} | english_stem | {lang}
... etc ...

ie the critical point seems to be that url_path is willing to soak up a
string containing "<" and ">", so the span tags don't get recognized as
separate lexemes. While that's "obviously" the wrong thing in this
particular example, I'm not sure if it's the wrong thing in general.
Can anyone comment on the frequency of usage of those two symbols in
URLs?

In any case it's weird that the URL lexeme doesn't span the same text
as the url_path one, but I'm not sure which one we should consider
wrong.

regards, tom lane

#3Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#2)
Re: Bug with Tsearch and tsvector

Tom Lane <tgl@sss.pgh.pa.us> wrote:

ie the critical point seems to be that url_path is willing to soak
up a string containing "<" and ">", so the span tags don't get
recognized as separate lexemes. While that's "obviously" the
wrong thing in this particular example, I'm not sure if it's the
wrong thing in general. Can anyone comment on the frequency of
usage of those two symbols in URLs?

http://www.ietf.org/rfc/rfc2396.txt section 2.4.3 "delims" expressly
forbids their use in URIs.

In any case it's weird that the URL lexeme doesn't span the same
text as the url_path one, but I'm not sure which one we should
consider wrong.

In spite of the above prohibition, I notice that firefox and wget
both seem to *try* to use such characters if they're included.

-Kevin

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#3)
Re: Bug with Tsearch and tsvector

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

Tom Lane <tgl@sss.pgh.pa.us> wrote:

ie the critical point seems to be that url_path is willing to soak
up a string containing "<" and ">", so the span tags don't get
recognized as separate lexemes. While that's "obviously" the
wrong thing in this particular example, I'm not sure if it's the
wrong thing in general. Can anyone comment on the frequency of
usage of those two symbols in URLs?

http://www.ietf.org/rfc/rfc2396.txt section 2.4.3 "delims" expressly
forbids their use in URIs.
In spite of the above prohibition, I notice that firefox and wget
both seem to *try* to use such characters if they're included.

Hmm, thanks for the reference, but I'm not sure this is specifying
quite what we want to get at. In particular I note that it excludes
'%' on the grounds that that ought to be escaped, so I guess this is
specifying the characters allowed in an underlying URI, *not* the
textual representation of a URI.

Still, it seems like this is a sufficient defense against any complaints
we might get for not treating "<" or ">" as part of a URL.

I wonder whether we ought to reject any of the other characters listed
here too. Right now, the InURLPath state seems to eat everything until
a space, quote, or double quote mark. We could easily make it stop at
"<" or ">" too, but what else?

regards, tom lane

#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#2)
Re: Bug with Tsearch and tsvector

I wrote:

"Donald Fraser" <postgres@kiwi-fraser.net> writes:

Using the default tsearch configuration, for 'english', text is being wrongly parsed into the tsvector type.

ts_debug shows that it's being parsed like this:

alias | description | token | dictionaries | dictionary | lexemes
-----------------+---------------------------------+----------------------------------------+----------------+--------------+------------------------------------------
tag | XML tag | <span lang="EN-GB"> | {} | |
protocol | Protocol head | http:// | {} | |
url | URL | www.harewoodsolutions.co.uk/press.aspx | {simple} | simple | {www.harewoodsolutions.co.uk/press.aspx}
host | Host | www.harewoodsolutions.co.uk | {simple} | simple | {www.harewoodsolutions.co.uk}
url_path | URL path | /press.aspx</span><span | {simple} | simple | {/press.aspx</span><span}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | lang | {english_stem} | english_stem | {lang}
... etc ...

ie the critical point seems to be that url_path is willing to soak up a
string containing "<" and ">", so the span tags don't get recognized as
separate lexemes. While that's "obviously" the wrong thing in this
particular example, I'm not sure if it's the wrong thing in general.
Can anyone comment on the frequency of usage of those two symbols in
URLs?

In any case it's weird that the URL lexeme doesn't span the same text
as the url_path one, but I'm not sure which one we should consider
wrong.

I poked at this a bit. The reason for the inconsistency between the url
and url_path lexemes is that the InURLPathStart state transitions
directly to InURLPath, which is *not* consistent with what happens while
parsing the URL as a whole: p_isURLPath() starts the sub-parser in
InFileFirst state. The attached proposed patch rectifies that by
transitioning to InFileFirst state instead. A possible objection to
this fix is that you may get either a "file" or a "url_path" component
lexeme, where before you always got "url_path". I'm not sure if that's
something to worry about or not; I'd tend to think there's nothing much
wrong with it.

The other change in the attached patch is to make InURLPath parsing
stop at "<" or ">", as per discussion.

With these changes I get

regression=# SELECT * from ts_debug('http://www.harewoodsolutions.co.uk/press.aspx</span>');
alias | description | token | dictionaries | dictionary | lexemes
----------+-------------------+----------------------------------------+--------------+------------+------------------------------------------
protocol | Protocol head | http:// | {} | |
url | URL | www.harewoodsolutions.co.uk/press.aspx | {simple} | simple | {www.harewoodsolutions.co.uk/press.aspx}
host | Host | www.harewoodsolutions.co.uk | {simple} | simple | {www.harewoodsolutions.co.uk}
file | File or path name | /press.aspx | {simple} | simple | {/press.aspx}
tag | XML tag | </span> | {} | |
(5 rows)

as compared to the prior behavior

regression=# SELECT * from ts_debug('http://www.harewoodsolutions.co.uk/press.aspx</span>');
alias | description | token | dictionaries | dictionary | lexemes
----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
protocol | Protocol head | http:// | {} | |
url | URL | www.harewoodsolutions.co.uk/press.aspx | {simple} | simple | {www.harewoodsolutions.co.uk/press.aspx}
host | Host | www.harewoodsolutions.co.uk | {simple} | simple | {www.harewoodsolutions.co.uk}
url_path | URL path | /press.aspx</span> | {simple} | simple | {/press.aspx</span>}
(4 rows)

Neither change affects the current set of regression tests; but none the
less there's a potential compatibility issue here, so my thought is to
apply this only in HEAD.

Comments?

regards, tom lane

#6Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#4)
Re: Bug with Tsearch and tsvector

Tom Lane <tgl@sss.pgh.pa.us> wrote:

Hmm, thanks for the reference, but I'm not sure this is specifying
quite what we want to get at. In particular I note that it
excludes '%' on the grounds that that ought to be escaped, so I
guess this is specifying the characters allowed in an underlying
URI, *not* the textual representation of a URI.

I'm not sure I follow you here -- % is disallowed "raw" because it
is itself the escape character to allow hexadecimal specification of
any disallowed character. So, being the escape character itself, we
would need to allow it.

Section 2.4, taken as a whole, makes sense to me, and argues that we
should always treat any text representation of a URI (including a
URL) as being in escaped form. If it weren't for backward
compatibility, I would feel strongly that we should take any of the
excluded characters as the end of a URI.

| A URI is always in an "escaped" form, since escaping or unescaping
| a completed URI might change its semantics. Normally, the only
| time escape encodings can safely be made is when the URI is being
| created from its component parts; each component may have its own
| set of characters that are reserved, so only the mechanism
| responsible for generating or interpreting that component can
| determine whether or not escaping a character will change its
| semantics. Likewise, a URI must be separated into its components
| before the escaped characters within those components can be
| safely decoded.

Still, it seems like this is a sufficient defense against any
complaints we might get for not treating "<" or ">" as part of a
URL.

I would think so.

I wonder whether we ought to reject any of the other characters
listed here too. Right now, the InURLPath state seems to eat
everything until a space, quote, or double quote mark. We could
easily make it stop at "<" or ">" too, but what else?

From the RFC:

| control = <US-ASCII coded characters 00-1F and 7F hexadecimal>
| space = <US-ASCII coded character 20 hexadecimal>
| delims = "<" | ">" | "#" | "%" | <">
| unwise = "{" | "}" | "|" | "\" | "^" | "[" | "]" | "`"

Except, of course, that since % is the escape character, it is OK.
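The RFC 2396 classes quoted above can be written out as Python sets to see exactly which characters a scanner would treat as URI terminators, keeping '%' legal since it is the escape character (an illustration only, not parser code):

```python
# RFC 2396 section 2.4.3 "excluded" character classes.
control = {chr(c) for c in range(0x00, 0x20)} | {chr(0x7F)}
space   = {" "}
delims  = set('<>#%"')
unwise  = set("{}|\\^[]`")

# Candidate terminator set, minus '%' (needed for percent-escapes).
excluded = (control | space | delims | unwise) - {"%"}
print("<" in excluded, "%" in excluded)  # True False
```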

Hmm. Having typed that, I'm staring at the # character, which is
used to mark off an anchor within an HTML page identified by the
URL. Should we consider the # and anchor part of a URL? Any other
questionable characters?

-Kevin

#7Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#5)
Re: Bug with Tsearch and tsvector

Tom Lane <tgl@sss.pgh.pa.us> wrote:

there's a potential compatibility issue here, so my thought is to
apply this only in HEAD.

Agreed.

-Kevin

#8Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#6)
Re: Bug with Tsearch and tsvector

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

Hmm. Having typed that, I'm staring at the # character, which is
used to mark off an anchor within an HTML page identified by the
URL. Should we consider the # and anchor part of a URL?

Yeah, I would think so.

This discussion is making me think that my previous patch went in the
wrong direction. The way that the existing code works is that after
seeing something that looks host-name-ish followed by a '/', it goes to
the FilePath state, which is why "/press.aspx" gets parsed as a file
name in my previous example. It only goes to the URLPath state if,
while in FilePath state, it sees '?'. This seems a tad bizarre, and
it means that anything we do to the URLPath rules will only affect the
part of a URL following a '?'.

What I think might be the right thing instead, if we are going to
tighten up what URLPath accepts, is to go directly to URLPath state
after seeing host-name-and-'/'. This eliminates the problem of
sometimes reporting "file" where we would have said "url_path"
before, and gives us a chance to apply the URLPath rules uniformly
to all text following a hostname.
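The state change being proposed can be rendered as a toy transition function (the real parser is a table-driven state machine in the backend tsearch code; the state names here follow the discussion, and this sketch is only meant to contrast the two flows):

```python
def next_state_old(state, ch):
    # Existing flow: host + '/' goes to FilePath; only a '?' seen while
    # in FilePath moves the parser into URLPath.
    if state == "Host" and ch == "/":
        return "FilePath"
    if state == "FilePath" and ch == "?":
        return "URLPath"
    return state

def next_state_new(state, ch):
    # Proposed flow: host + '/' goes straight to URLPath, so one set of
    # rules governs all text following the hostname.
    if state == "Host" and ch == "/":
        return "URLPath"
    return state

print(next_state_old("Host", "/"))  # FilePath
print(next_state_new("Host", "/"))  # URLPath
```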

Attached is a patch that does it that way instead. We'd probably
not want to apply this as-is, but should first tighten up what
characters URLPath allows, per Kevin's spec research.

I find that this patch does create a couple of changes in the regression
test outputs. The reason is that it parses this case differently:
select * from ts_debug('http://5aew.werc.ewr:8100/?');
Existing code says that that is

alias | description | token | dictionaries | dictionary | lexemes
----------+---------------+--------------------+--------------+------------+----------------------
protocol | Protocol head | http:// | {} | |
host | Host | 5aew.werc.ewr:8100 | {simple} | simple | {5aew.werc.ewr:8100}
blank | Space symbols | /? | {} | |
(3 rows)

while with this patch we get

alias | description | token | dictionaries | dictionary | lexemes
----------+---------------+----------------------+--------------+------------+------------------------
protocol | Protocol head | http:// | {} | |
url | URL | 5aew.werc.ewr:8100/? | {simple} | simple | {5aew.werc.ewr:8100/?}
host | Host | 5aew.werc.ewr:8100 | {simple} | simple | {5aew.werc.ewr:8100}
url_path | URL path | /? | {simple} | simple | {/?}
(4 rows)

Offhand I see no reason to discriminate against "/?" as a URL path, so
this change seems fine to me, but it is a change.

Thoughts?

regards, tom lane

#9Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#8)
Re: Bug with Tsearch and tsvector

Tom Lane <tgl@sss.pgh.pa.us> wrote:

We'd probably not want to apply this as-is, but should first
tighten up what characters URLPath allows, per Kevin's spec
research.

If we're headed that way, I figured I should double-check. The RFC
I referenced was later obsoleted by:

http://www.ietf.org/rfc/rfc3986.txt

I haven't been able to find anything more recent. This newer RFC
doesn't seem to list characters which are *not* part of a URI, so
here are the characters which *are* allowed:

gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"

sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="

unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"

I'll read this RFC closely and follow up later today.

-Kevin

#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#9)
Re: Bug with Tsearch and tsvector

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

Tom Lane <tgl@sss.pgh.pa.us> wrote:

We'd probably not want to apply this as-is, but should first
tighten up what characters URLPath allows, per Kevin's spec
research.

If we're headed that way, I figured I should double-check. The RFC
I referenced was later obsoleted by:
http://www.ietf.org/rfc/rfc3986.txt

On reflection, since we're changing the behavior anyway, it seems
like the most defensible thing to do is make the TS parser follow
the RFC's allowed character set exactly. The newer RFC doesn't
restrict '#' so that possible corner case is gone.

regards, tom lane

#11Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#10)
Re: Bug with Tsearch and tsvector

Tom Lane <tgl@sss.pgh.pa.us> wrote:

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

Tom Lane <tgl@sss.pgh.pa.us> wrote:

We'd probably not want to apply this as-is, but should first
tighten up what characters URLPath allows, per Kevin's spec
research.

If we're headed that way, I figured I should double-check. The
RFC I referenced was later obsoleted by:
http://www.ietf.org/rfc/rfc3986.txt

On reflection, since we're changing the behavior anyway, it seems
like the most defensible thing to do is make the TS parser follow
the RFC's allowed character set exactly. The newer RFC doesn't
restrict '#' so that possible corner case is gone.

It seems worth mentioning that there is a BSD licensed URI parser on
sourceforge:

http://uriparser.sourceforge.net/

I'm not advocating for using it, I just ran across it and it seemed
of possible interest.

-Kevin

#12Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Kevin Grittner (#9)
Re: Bug with Tsearch and tsvector

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:

I'll read this RFC closely and follow up later today.

For anyone not clear on what a URI is compared to a URL, every URL
is also a URI (but not the other way around):

A URI can be further classified as a locator, a name, or both.
The term "Uniform Resource Locator" (URL) refers to the subset of
URIs that, in addition to identifying a resource, provide a means
of locating the resource by describing its primary access
mechanism (e.g., its network "location").

So rules for URIs apply to URLs.

Regarding allowed characters, the relevant portions seem to be:

The URI syntax has been designed with global transcription as one
of its main considerations. A URI is a sequence of characters
from a very limited set: the letters of the basic Latin alphabet,
digits, and a few special characters.

The generic syntax uses the slash ("/"), question mark ("?"), and
number sign ("#") characters to delimit components that are
significant to the generic parser's hierarchical interpretation of
an identifier.

A URI is composed from a limited set of characters consisting of
digits, letters, and a few graphic symbols. A reserved subset of
those characters may be used to delimit syntax components within a
URI while the remaining characters, including both the unreserved
set and those reserved characters not acting as delimiters, define
each component's identifying data.

A percent-encoding mechanism is used to represent a data octet in
a component when that octet's corresponding character is outside
the allowed set or is being used as a delimiter of, or within, the
component. A percent-encoded octet is encoded as a character
triplet, consisting of the percent character "%" followed by the
two hexadecimal digits representing that octet's numeric value.
For example, "%20" is the percent-encoding for the binary octet
"00100000" (ABNF: %x20), which in US-ASCII corresponds to the
space character (SP). Section 2.4 describes when percent-encoding
and decoding is applied.

pct-encoded = "%" HEXDIG HEXDIG

The uppercase hexadecimal digits 'A' through 'F' are equivalent to
the lowercase digits 'a' through 'f', respectively. If two URIs
differ only in the case of hexadecimal digits used in percent-
encoded octets, they are equivalent. For consistency, URI
producers and normalizers should use uppercase hexadecimal digits
for all percent-encodings.
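The percent-encoding rules quoted above can be checked with Python's standard library (an aside for illustration; tsearch itself does no percent-decoding):

```python
from urllib.parse import quote, unquote

# The RFC's own example: space encodes as %20.
print(quote(" ", safe=""))   # %20
print(unquote("%2F"))        # /
# Upper- and lowercase hex digits in escapes are equivalent.
print(unquote("%2f") == unquote("%2F"))  # True
```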

reserved = gen-delims / sub-delims

gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"

sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="

unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"

I think that we should accept all the above characters (reserved and
unreserved) and the percent character (since it is the escape
character) as part of a URL.
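The character test this proposal implies can be sketched directly from the ABNF above: a character belongs to a URL if it is in the RFC 3986 reserved or unreserved sets, or is '%'. This is a Python illustration of the rule, not the C implementation:

```python
import string

GEN_DELIMS = set(":/?#[]@")
SUB_DELIMS = set("!$&'()*+,;=")
UNRESERVED = set(string.ascii_letters + string.digits + "-._~")

def is_url_char(ch: str) -> bool:
    """True if ch may appear in a URL per RFC 3986, plus '%' for escapes."""
    return ch in GEN_DELIMS or ch in SUB_DELIMS or ch in UNRESERVED or ch == "%"

print(is_url_char("#"), is_url_char("<"), is_url_char("%"))  # True False True
```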

Certainly *not* back-patchable.

I don't know whether we should try to extract components of the URL,
but if we do, perhaps we should also adopt the standard names for
the components:

The generic URI syntax consists of a hierarchical sequence of
components referred to as the scheme, authority, path, query, and
fragment.

URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]

hier-part = "//" authority path-abempty
/ path-absolute
/ path-rootless
/ path-empty

The scheme and path components are required, though the path may
be empty (no characters). When authority is present, the path
must either be empty or begin with a slash ("/") character. When
authority is not present, the path cannot begin with two slash
characters ("//"). These restrictions result in five different
ABNF rules for a path (Section 3.3), only one of which will match
any given URI reference.

The following are two example URIs and their component parts:

foo://example.com:8042/over/there?name=ferret#nose
\_/   \______________/\_________/ \_________/ \__/
 |           |            |            |        |
scheme     authority       path        query   fragment
 |   _____________________|__
/ \ /                        \
urn:example:animal:ferret:nose
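For comparison, Python's urllib.parse splits the RFC's example URI into the same five standard components (illustrative only; tsearch has its own component names, as noted below):

```python
from urllib.parse import urlsplit

parts = urlsplit("foo://example.com:8042/over/there?name=ferret#nose")
print(parts.scheme)    # foo
print(parts.netloc)    # example.com:8042  (the "authority")
print(parts.path)      # /over/there
print(parts.query)     # name=ferret
print(parts.fragment)  # nose
```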

I'm not really sure of the source for names we're now using.

Of course, the bigger the changes, the less they sound like material
for a quick, 11th hour 9.0 patch.

-Kevin

#13Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#12)
Re: Bug with Tsearch and tsvector

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

I think that we should accept all the above characters (reserved and
unreserved) and the percent character (since it is the escape
character) as part of a URL.

Check.

I don't know whether we should try to extract components of the URL,
but if we do, perhaps we should also adopt the standard names for
the components:

I don't think we should change the component names; that would break
existing text search configurations, and it doesn't seem to buy much
in return.

regards, tom lane

#14Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#12)
Re: Bug with Tsearch and tsvector

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

reserved = gen-delims / sub-delims
gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"

I think that we should accept all the above characters (reserved and
unreserved) and the percent character (since it is the escape
character) as part of a URL.

I've applied the attached patch to make it work that way.

regards, tom lane

#15Jasen Betts
jasen@xnet.co.nz
In reply to: Donald Fraser (#1)
Re: Bug with Tsearch and tsvector

On 2010-04-26, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:

Tom Lane <tgl@sss.pgh.pa.us> wrote:

From the RFC:

| control = <US-ASCII coded characters 00-1F and 7F hexadecimal>
| space = <US-ASCII coded character 20 hexadecimal>
| delims = "<" | ">" | "#" | "%" | <">
| unwise = "{" | "}" | "|" | "\" | "^" | "[" | "]" | "`"

Except, of course, that since % is the escape character, it is OK.

Hmm. Having typed that, I'm staring at the # character, which is
used to mark off an anchor within an HTML page identified by the
URL. Should we consider the # and anchor part of a URL? Any other
questionable characters?

\ is popular in URIs on some platforms, or is a URI a different beast?

#16Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jasen Betts (#15)
Re: Bug with Tsearch and tsvector

Jasen Betts <jasen@xnet.co.nz> writes:

\ is popular in URIs on some platforms, or is a URI a different beast?

I hope not, because \ is explicitly disallowed by both the older and
newer versions of that RFC.

I did think of proposing that we allow \ and : in FilePath, which is
currently pretty Unix-centric:

regression=# select * from ts_debug('/foo/bar.baz');
alias | description | token | dictionaries | dictionary | lexemes
-------+-------------------+--------------+--------------+------------+----------------
file | File or path name | /foo/bar.baz | {simple} | simple | {/foo/bar.baz}
(1 row)

regression=# select * from ts_debug(E'C:\\foo\\bar.baz');
alias | description | token | dictionaries | dictionary | lexemes
-----------+-----------------+---------+----------------+--------------+-----------
asciiword | Word, all ASCII | C | {english_stem} | english_stem | {c}
blank | Space symbols | :\ | {} | |
asciiword | Word, all ASCII | foo | {english_stem} | english_stem | {foo}
blank | Space symbols | \ | {} | |
host | Host | bar.baz | {simple} | simple | {bar.baz}
(5 rows)

But that's more or less orthogonal to what URLPath should allow.

regards, tom lane

#17Jasen Betts
jasen@xnet.co.nz
In reply to: Donald Fraser (#1)
Re: Bug with Tsearch and tsvector

On 2010-04-29, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Jasen Betts <jasen@xnet.co.nz> writes:

\ is popular in URIs on some platforms, or is a URI a different beast?

I hope not, because \ is explicitly disallowed by both the older and
newer versions of that RFC.

I should have known better than to assume that Microsoft was using a
term correctly.