author	Tom Lane <tgl@sss.pgh.pa.us>	2022-08-31 10:42:05 -0400
committer	Tom Lane <tgl@sss.pgh.pa.us>	2022-08-31 10:42:05 -0400
commit	e969f1ae2b01d7b69371332b839fa16e3b54e56d (patch)
tree	cd54fd32aea653f59b9ef29841efe925ccc36fee
parent	464db46760d2a89e1933038330f1d84210115886 (diff)
download	postgresql-e969f1ae2b01d7b69371332b839fa16e3b54e56d.tar.gz
	postgresql-e969f1ae2b01d7b69371332b839fa16e3b54e56d.zip
In the Snowball dictionary, don't try to stem excessively-long words.
If the input word exceeds 1000 bytes, don't pass it to the stemmer;
just return it as-is after case folding.  Such an input is surely
not a word in any human language, so whatever the stemmer might
do to it would be pretty dubious in the first place.

Adding this restriction protects us against a known
recursion-to-stack-overflow problem in the Turkish stemmer, and
it seems like good insurance against any other safety or performance
issues that may exist in the Snowball stemmers.  (I note, for
example, that they contain no CHECK_FOR_INTERRUPTS calls, so we
really don't want them running for a long time.)  The threshold
of 1000 bytes is arbitrary.

An alternative definition could have been to treat such words as
stopwords, but that seems like a bigger break from the old behavior.

Per report from Egor Chindyaskin and Alexander Lakhin.  Thanks to
Olly Betts for the recommendation to fix it this way.

Discussion: https://postgr.es/m/1661334672.728714027@f473.i.mail.ru
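A minimal SQL sketch of the resulting behavior (assuming the stock
english_stem Snowball dictionary; the long-input result shown is
illustrative, not copied from a real session):

	-- an ordinary word is still stemmed as before
	SELECT ts_lexize('english_stem', 'Stemming');
	--  ts_lexize
	-- -----------
	--  {stem}

	-- a "word" longer than 1000 bytes now bypasses the stemmer and
	-- comes back case-folded but otherwise unmodified
	SELECT ts_lexize('english_stem', repeat('A', 1001));
	-- returns one lexeme: 1001 lowercase 'a' characters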
-rw-r--r--	src/backend/snowball/dict_snowball.c	18
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/src/backend/snowball/dict_snowball.c b/src/backend/snowball/dict_snowball.c
index 8c25f3ebbf2..11624145d65 100644
--- a/src/backend/snowball/dict_snowball.c
+++ b/src/backend/snowball/dict_snowball.c
@@ -275,8 +275,24 @@ dsnowball_lexize(PG_FUNCTION_ARGS)
 	char	   *txt = lowerstr_with_len(in, len);
 	TSLexeme   *res = palloc0(sizeof(TSLexeme) * 2);
 
-	if (*txt == '\0' || searchstoplist(&(d->stoplist), txt))
+	/*
+	 * Do not pass strings exceeding 1000 bytes to the stemmer, as they're
+	 * surely not words in any human language.  This restriction avoids
+	 * wasting cycles on stuff like base64-encoded data, and it protects us
+	 * against possible inefficiency or misbehavior in the stemmer.  (For
+	 * example, the Turkish stemmer has an indefinite recursion, so it can
+	 * crash on long-enough strings.)  However, Snowball dictionaries are
+	 * defined to recognize all strings, so we can't reject the string as an
+	 * unknown word.
+	 */
+	if (len > 1000)
+	{
+		/* return the lexeme lowercased, but otherwise unmodified */
+		res->lexeme = txt;
+	}
+	else if (*txt == '\0' || searchstoplist(&(d->stoplist), txt))
 	{
+		/* empty or stopword, so report as stopword */
 		pfree(txt);
 	}
 	else