Flexible configuration for full-text search

Started by Aleksandr Parfenov about 8 years ago · 38 messages

#1 Aleksandr Parfenov
a.parfenov@postgrespro.ru
1 attachment(s)

Hello hackers,

Arthur Zakirov and I are working on a patch to introduce a more
flexible way to configure full-text search in PostgreSQL, because the
current syntax doesn't allow a variety of scenarios to be handled.
Additionally, some parts contain implicit processing logic, such as
filtering dictionaries with the TSL_FILTER flag, so part of the
configuration has moved into the dictionary itself and is in most
cases hardcoded there. One more drawback of the current FTS
configuration is that we can't separate dictionary selection from
output production, so we can't configure FTS to use one dictionary if
another one recognized a token (e.g. use hunspell if a dictionary of
nouns recognized the token).

Basically, the key goal of the patch is to give the user more
control over how the text is processed.

The patch introduces a way to configure FTS based on a
CASE/WHEN/THEN/ELSE construction. The current comma-separated list
syntax remains available for compatibility. The basic form of the new
syntax is:

ALTER TEXT SEARCH CONFIGURATION <fts_conf>
ALTER MAPPING FOR <token_types> WITH
CASE
WHEN <condition> THEN <command>
....
[ ELSE <command> ]
END;

A condition is a logical expression on dictionaries. You can specify
how to interpret a dictionary's output with
dictionary IS [ NOT ] NULL - for a NULL result
dictionary IS [ NOT ] STOPWORD - for an empty (stopword) result

If no interpretation marker is given, the condition is interpreted as:
dictionary IS NOT NULL AND dictionary IS NOT STOPWORD
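
For instance, with a dictionary named english_hunspell (used only for
illustration here), these two conditions are equivalent:

WHEN english_hunspell THEN english_hunspell
WHEN english_hunspell IS NOT NULL
  AND english_hunspell IS NOT STOPWORD THEN english_hunspell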

A command is an expression on dictionary output sets with the
operators UNION, EXCEPT and INTERSECT. Additionally, there is a
special operator MAP BY, which allows us to recreate the behavior of
filtering dictionaries. The MAP BY operator takes the output of the
right subexpression and sends it to the left subexpression as an
input token (if there is more than one lexeme, each one is sent
separately).
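
As a sketch (assuming an unaccent-style filtering dictionary named
unaccent and an ispell dictionary named english_ispell), the
following feeds the unaccented token into english_ispell and falls
back to the stemmer otherwise:

ALTER TEXT SEARCH CONFIGURATION unaccented
ALTER MAPPING FOR word WITH
CASE
    WHEN (english_ispell MAP BY unaccent) IS NOT NULL THEN
        english_ispell MAP BY unaccent
    ELSE english_stem
END;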

Here are a few examples of the new configuration in use, with
comparisons to solutions using the current syntax.

1) Multilingual search. Can be used for FTS on a set of documents in
different languages (the example covers German and English).
ALTER TEXT SEARCH CONFIGURATION multi
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN english_hunspell AND german_hunspell THEN
english_hunspell UNION german_hunspell
WHEN english_hunspell THEN english_hunspell
WHEN german_hunspell THEN german_hunspell
ELSE german_stem UNION english_stem
END;

With the old configuration we have to use a separate vector and index
for each required language, and the query has to combine the search
results for each language:
SELECT * FROM en_de_documents WHERE
to_tsvector('english', text) @@ to_tsquery('english', 'query')
OR
to_tsvector('german', text) @@ to_tsquery('german', 'query');

The new multilingual search configuration itself looks more complex
but avoids splitting the index and vectors. Additionally, for similar
languages, or for configurations with simple or *_stem dictionaries
in the list, we can reduce the total size of the index, since in the
current-state example the index for the English configuration will
also keep data about documents written in German, and vice versa.

2) Combination of exact search with morphological search. This patch
doesn't fully solve the problem, but it is a step toward a solution.
Currently, we have to split exact and morphological search in the
query manually and use a separate index for each part. With the new
way to configure FTS we can use the following configuration:
ALTER TEXT SEARCH CONFIGURATION exact_and_morph
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN english_hunspell THEN english_hunspell UNION simple
ELSE english_stem UNION simple
END;

Some queries, like "'looking' <1> through" where 'looking' is a
search for the exact form of the word, don't work in current-state
FTS: we can guarantee that the document contains both 'looking' and
'through', but we can't be sure about the distance between them.
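
A sketch of such a query against the exact_and_morph configuration
above (the exact-form marking on 'looking' is not yet expressible,
which is exactly the limitation discussed below):

SELECT to_tsvector('exact_and_morph', 'looking through the window')
       @@ to_tsquery('exact_and_morph', 'looking <1> through');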

Unfortunately, we can't fully support such queries with the current
format of tsvector, because after processing we can't distinguish
whether a word appeared in its normal form in the text or was
processed by some dictionary. This leads to false positive hits if
the user searches for the normal form of the word. I think we should
give the user the ability to mark a dictionary as something like an
"exact form producer". But without a tsvector modification this mark
is useless, since we can't mark the output of such a dictionary in
the tsvector.

There is a patch on commitfest which removes the 1MB limit on
tsvector [1]. There are a few free bits available in each lexeme in
the vector, so one of them could be used for an "exact" flag.

3) Using different dictionaries for recognition and output
generation. As I mentioned before, in the new syntax the condition
and command are separate, and we can use this for more complex text
processing. Here is an example that processes only nouns:
ALTER TEXT SEARCH CONFIGURATION nouns_only
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN english_noun THEN english_hunspell
END;

This behavior cannot be achieved with the current state of FTS.

4) Special stopword processing allows us to discard stopwords even if
the main dictionary doesn't support such a feature (in this example
the pl_ispell dictionary keeps stopwords in the text):
ALTER TEXT SEARCH CONFIGURATION pl_without_stops
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN simple_pl IS NOT STOPWORD THEN pl_ispell
END;

The patch is attached. I will be glad to hear hackers' opinions
about it.

Several related cases were discussed on -hackers earlier:

Check for stopwords using non-target dictionary.
/messages/by-id/4733B65A.9030707@students.mimuw.edu.pl

Support union of outputs of several dictionaries.
/messages/by-id/c6851b7e-da25-3d8e-a5df-022c395a11b4@postgrespro.ru

Support of chain of dictionaries using MAP BY operator.
/messages/by-id/46D57E6F.8020009@enterprisedb.com

[1] Remove 1MB size limit in tsvector:
https://commitfest.postgresql.org/15/1221/

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration.patch (text/x-patch)
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index b44aac9..960a28a 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_expression</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_expression</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,16 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">dictionary_expression</replaceable></term>
+    <listitem>
+     <para>
+      A dictionary expression tree. The dictionary expression
+      is a list of condition/command pairs that define the way to process text.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +147,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +169,64 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionaries expression</title>
+
+  <refsect2>
+   <title>Format</title>
+   <programlisting>
+    CASE
+      WHEN <replaceable class="parameter">condition</replaceable> THEN <replaceable class="parameter">command</replaceable>
+      [ WHEN <replaceable class="parameter">condition</replaceable> THEN <replaceable class="parameter">command</replaceable> ]
+      [ ELSE <replaceable class="parameter">command</replaceable> ]
+    END
+   </programlisting>
+   <para>
+    A condition is
+   </para>
+
+   <programlisting>
+    dictionary_name [IS [NOT] {NULL|STOPWORD}] [ {AND|OR} ... ]
+    or
+    (dictionary_name MAP BY dictionary_name) IS [NOT] {NULL|STOPWORD} [ {AND|OR} ... ]
+   </programlisting>
+
+   <para>
+    And command is:
+   </para>
+
+   <programlisting>
+    dictionary_name [ {UNION|INTERSECT|EXCEPT|MAP BY} ... ]
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Condition</title>
+   <para>
+    A condition is used to select the command for token processing. A condition
+    is a boolean expression. A dictionary can be tested for <literal>NULL</>
+    output or stop-word output via the options <literal>IS [NOT] {NULL|STOPWORD}</>.
+    If no test option is mentioned (<literal>dictionary_name</> without additional
+    options), it is tested for both non-<literal>NULL</> and non-stop-word output.
+   </para>
+  </refsect2>
+
+  <refsect2>
+   <title>Command</title>
+   <para>
+    A command describes how <productname>PostgreSQL</productname> should build
+    a result set for the current token. The output of each dictionary is a set of
+    lexemes. The results of dictionaries can be combined with the operators
+    <literal>UNION</>, <literal>EXCEPT</>, <literal>INTERSECT</> and a special
+    operator <literal>MAP BY</>. The <literal>MAP BY</> operator uses the output of
+    the right subexpression as the input for the left subexpression. If the right
+    subexpression's output is <literal>NULL</>, the initial token is used instead.
+    If the output contains multiple lexemes, each lexeme is used as a token for
+    the left subexpression independently and the final results are combined.
+   </para>
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 7b4912d..f9d8ecd 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries">) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type. Conditions and commands are logical and set expressions
+    on dictionaries (<xref linkend="textsearch-dictionaries">), respectively.
+    The first pair whose condition is true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token, based on its command.  For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is also ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2232,7 +2233,9 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
       dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      <firstterm>filtering dictionary</firstterm>). This behavior is applicable only
+      to the comma-separated configuration
+      (see <xref linkend="SQL-ALTERTSCONFIG"> for more information)
      </para>
     </listitem>
     <listitem>
@@ -2264,38 +2267,85 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
+   A list of cases is described as condition/command pairs. Each condition is
+   evaluated in order to select the appropriate command for generating the
+   resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is a boolean expression with dictionaries used as operands, the
+   basic logic operators <literal>AND</literal>, <literal>OR</literal>, <literal>NOT</literal> and
+   the special operator <literal>MAP BY</literal>. In addition to operators, each operand
+   can carry an <literal>IS [NOT] NULL</literal> or <literal>IS [NOT] STOPWORD</literal> option
+   to specify how to interpret the lexemes as a boolean value. If no option is given,
+   it is interpreted as <literal>dictionary IS NOT NULL AND dictionary IS NOT STOPWORD</literal>.
+
+   The special operator <literal>MAP BY</literal> uses the output of the right-hand
+   subexpression as the input for the left-hand one. In a condition, the left and right
+   subexpressions can be either another <literal>MAP BY</literal> expression or a
+   dictionary expression. The result of <literal>MAP BY</literal> must be explicitly
+   marked for boolean interpretation.
+  </para>
+
+  <para>
+   A command is a set expression with dictionaries used as operands, the basic
+   set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP BY</literal>. The behavior of the <literal>MAP BY</literal>
+   operator is similar to the one in a condition, but without restrictions on the
+   content of the subexpressions, since all operators operate on sets.
+  </para>
+
+  <para>
+   The general rule for configuring a list of condition/command pairs
    is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
-   terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   terms, a general English dictionary and a <application>Snowball</application> English
+   stemmer via the comma-separated variant of the mapping:
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+   Another example is a configuration for both English and German languages via
+   the operator-separated variant of the mapping:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE
+            WHEN english_ispell AND german_ispell THEN
+                 english_ispell UNION german_ispell
+            WHEN english_ispell THEN
+                 english_ispell UNION german_stem
+            WHEN german_ispell THEN
+                 german_ispell UNION english_stem
+            ELSE
+                 english_stem UNION german_stem
+        END;
+</programlisting>
+
   </para>
 
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   A filtering dictionary can be placed anywhere in a comma-separated list,
+   except at the end where it'd be useless.
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"> module.
+   Otherwise, a filtering dictionary should be placed on the right-hand side of the
+   <literal>MAP BY</literal> operator. If a filtering dictionary returns
+   <literal>NULL</literal>, it passes the initial token further along the processing chain.
   </para>
 
   <sect2 id="textsearch-stopwords">
@@ -2462,9 +2512,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token | dictionaries |   command    | lexemes 
+-----------+-----------------+-------+--------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | english_stem | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2476,9 +2526,9 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |      dictionaries       |  command   | lexemes 
+-----------+-----------------+-------+-------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | my_synonym,english_stem | my_synonym | {paris}
 </screen>
    </para>
 
@@ -3107,6 +3157,20 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE
+        WHEN pg_dict IS NOT NULL THEN pg_dict
+        WHEN english_ispell THEN english_ispell
+        ELSE english_stem
+      END;
+</programlisting>
+
    Or use the alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3177,13 +3241,13 @@ SHOW default_text_search_config;
   </indexterm>
 
 <synopsis>
-ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">alias</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
-         OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
+ts_debug(<optional> <replaceable class="PARAMETER">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="PARAMETER">document</replaceable> <type>text</type>,
+         OUT <replaceable class="PARAMETER">alias</replaceable> <type>text</type>,
+         OUT <replaceable class="PARAMETER">description</replaceable> <type>text</type>,
+         OUT <replaceable class="PARAMETER">token</replaceable> <type>text</type>,
+         OUT <replaceable class="PARAMETER">dictionaries</replaceable> <type>text</type>,
+         OUT <replaceable class="PARAMETER">command</replaceable> <type>text</type>,
+         OUT <replaceable class="PARAMETER">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
 
@@ -3220,20 +3284,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionaries</replaceable> <type>regdictionary[]</type> &mdash; the
-       dictionaries selected by the configuration for this token type
+       <replaceable>dictionaries</replaceable> <type>text</type> &mdash; the
+       dictionaries defined by the configuration for this token type
       </para>
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way to generate output
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3246,32 +3310,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token | dictionaries |   command    | lexemes 
+-----------+-----------------+-------+--------------+--------------+---------
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | fat   | english_stem | english_stem | {fat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | cat   | english_stem | english_stem | {cat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | sat   | english_stem | english_stem | {sat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | on    | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | mat   | english_stem | english_stem | {mat}
+ blank     | Space symbols   |       |              |              | 
+ blank     | Space symbols   | -     |              |              | 
+ asciiword | Word, all ASCII | it    | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | ate   | english_stem | english_stem | {ate}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | fat   | english_stem | english_stem | {fat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | rats  | english_stem | english_stem | {rat}
 </screen>
   </para>
 
@@ -3297,13 +3361,13 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |        dictionaries         |    command     |   lexemes   
+-----------+-----------------+-------------+-----------------------------+----------------+-------------
+ asciiword | Word, all ASCII | The         | english_ispell,english_stem | english_ispell | {}
+ blank     | Space symbols   |             |                             |                | 
+ asciiword | Word, all ASCII | Brightest   | english_ispell,english_stem | english_ispell | {bright}
+ blank     | Space symbols   |             |                             |                | 
+ asciiword | Word, all ASCII | supernovaes | english_ispell,english_stem | english_stem   | {supernova}
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dc40cde..74cab6b 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -944,55 +944,13 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT dictionaries text,
+    OUT dictionary text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index adc7cd6..a0f1650 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -52,6 +55,7 @@ static void MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 						 HeapTuple tup, Relation relMap);
 static void DropConfigurationMapping(AlterTSConfigurationStmt *stmt,
 						 HeapTuple tup, Relation relMap);
+static TSMapRuleList *ParseTSMapList(List *dictMapList);
 
 
 /* --------------------- TS Parser commands ------------------------ */
@@ -935,11 +939,21 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapRuleList *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionariesList(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+			pfree(dictionaryOids);
+			pfree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,105 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
+static TSMapExpression *
+ParseTSMapExpression(DictMapExprElem *head)
+{
+	TSMapExpression *result;
+
+	if (head == NULL)
+		return NULL;
+
+	result = palloc0(sizeof(TSMapExpression));
+
+	if (head->kind == DICT_MAP_OPERATOR)
+	{
+		result->left = ParseTSMapExpression(head->left);
+		result->right = ParseTSMapExpression(head->right);
+		result->operator = head->oper;
+		result->options = head->options;
+	}
+	else if (head->kind == DICT_MAP_CONST_TRUE)
+	{
+		result->left = result->right = NULL;
+		result->is_true = true;
+		result->options = result->operator = 0;
+	}
+	else						/* head->kind == DICT_MAP_OPERAND */
+	{
+		result->dictionary = get_ts_dict_oid(head->dictname, false);
+		result->options = head->options;
+	}
+
+	return result;
+}
+
+static TSMapRule
+ParseTSMapRule(DictMapElem *elem)
+{
+	TSMapRule	result;
+
+	memset(&result, 0, sizeof(result));
+
+	result.condition.expression = ParseTSMapExpression(elem->condition);
+	if (elem->commandmaps)
+	{
+		result.command.ruleList = ParseTSMapList(elem->commandmaps);
+		result.command.is_expression = false;
+		result.command.expression = NULL;
+	}
+	else
+	{
+		result.command.ruleList = NULL;
+		result.command.is_expression = true;
+		result.command.expression = ParseTSMapExpression(elem->command);
+	}
+
+	return result;
+}
+
+static TSMapRuleList *
+ParseTSMapList(List *dictMapList)
+{
+	int			i;
+	TSMapRuleList *result;
+	ListCell   *c;
+
+	if (list_length(dictMapList) == 1 && ((DictMapElem *) lfirst(dictMapList->head))->dictnames)
+	{
+		DictMapElem *elem = (DictMapElem *) lfirst(dictMapList->head);
+
+		result = palloc0(sizeof(TSMapRuleList));
+		result->count = list_length(elem->dictnames);
+		result->data = palloc0(sizeof(TSMapRule) * result->count);
+
+		i = 0;
+		foreach(c, elem->dictnames)
+		{
+			List	   *names = (List *) lfirst(c);
+
+			result->data[i].dictionary = get_ts_dict_oid(names, false);
+			i++;
+		}
+	}
+	else
+	{
+		result = palloc0(sizeof(TSMapRuleList));
+		result->count = list_length(dictMapList);
+		result->data = palloc0(sizeof(TSMapRule) * result->count);
+
+		i = 0;
+		foreach(c, dictMapList)
+		{
+			List	   *l = (List *) lfirst(c);
+
+			result->data[i] = ParseTSMapRule((DictMapElem *) l);
+			i++;
+		}
+	}
+
+	return result;
+}
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1287,8 +1399,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapRuleList *mapRules = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1327,17 +1440,23 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
+	if (stmt->dict_map)
+		mapRules = ParseTSMapList(stmt->dict_map);
+
 	if (stmt->replace)
 	{
 		/*
@@ -1357,6 +1476,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1380,25 +1503,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			mapRules = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(mapRules, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(mapRules));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(mapRules);
 		}
 
 		systable_endscan(scan);
@@ -1408,24 +1527,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(mapRules));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index c1a83ca..476e8da 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4371,6 +4371,32 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(dictname);
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(kind);
+	COPY_SCALAR_FIELD(oper);
+	COPY_SCALAR_FIELD(options);
+
+	return newnode;
+}
+
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5373,6 +5399,12 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 7a70001..4434566 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2177,6 +2177,28 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(dictname);
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_SCALAR_FIELD(oper);
+	COMPARE_SCALAR_FIELD(options);
+
+	return true;
+}
+
+static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3517,6 +3539,12 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4c83a63..6a14890 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapExprElem		*dmapexpr;
+	DictMapElem			*dmap;
 }
 
 %type <node>	stmt schema_stmt
@@ -396,8 +399,9 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				relation_expr_list dostmt_opt_list
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
-				publication_name_list
 				vacuum_relation_list opt_vacuum_relation_list
+				publication_name_list dictionary_map_list dictionary_map
+				dictionary_map_case
 
 %type <list>	group_by_list
 %type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
@@ -581,6 +585,15 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>		partbound_datum PartitionRangeDatum
 %type <list>		partbound_datum_list range_datum_list
 
+%type <ival>		dictionary_map_clause_expr_dict_not dictionary_map_clause_expr_dict_flag
+%type <dmapexpr>	dictionary_map_clause dictionary_map_clause_expr_not
+					dictionary_map_command dictionary_map_command_expr_paren
+					dictionary_map_dict dictionary_map_clause_expr_or
+					dictionary_map_clause_expr_and dictionary_map_clause_expr_mapby_ext
+					dictionary_map_clause_expr_mapby
+					dictionary_map_clause_expr_paren dictionary_map_clause_expr_dict
+%type <dmap>		dictionary_map_else dictionary_map_element
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -648,7 +661,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -671,7 +685,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SAVEPOINT SCHEMA SCHEMAS SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
 	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE SHOW
 	SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME SQL_P STABLE STANDALONE_P
-	START STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P
+	START STATEMENT STATISTICS STDIN STDOUT STOPWORD STORAGE STRICT_P STRIP_P
 	SUBSCRIPTION SUBSTRING SYMMETRIC SYSID SYSTEM_P
 
 	TABLE TABLES TABLESAMPLE TABLESPACE TEMP TEMPLATE TEMPORARY TEXT_P THEN
@@ -10005,24 +10019,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_map
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_map
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10074,6 +10090,272 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+dictionary_map:
+			dictionary_map_case { $$ = $1; }
+			| any_name_list
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->condition = NULL;
+				n->command = NULL;
+				n->commandmaps = NULL;
+				n->dictnames = $1;
+				$$ = list_make1(n);
+			}
+		;
+
+dictionary_map_case:
+			CASE dictionary_map_list END_P
+			{
+				$$ = $2;
+			}
+			| CASE dictionary_map_list dictionary_map_else END_P
+			{
+				$$ = lappend($2, $3);
+			}
+		;
+
+dictionary_map_list:
+			dictionary_map_element							{ $$ = list_make1($1); }
+			| dictionary_map_list dictionary_map_element	{ $$ = lappend($1, $2); }
+		;
+
+dictionary_map_else:
+			ELSE dictionary_map_command
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->command = $2;
+				n->commandmaps = NULL;
+				n->dictnames = NULL;
+
+				n->condition = makeNode(DictMapExprElem);
+				n->condition->kind = DICT_MAP_CONST_TRUE;
+				n->condition->oper = 0;
+				n->condition->options = 0;
+				n->condition->left = NULL;
+				n->condition->right = NULL;
+
+				$$ = n;
+			}
+			| ELSE dictionary_map_case
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->command = NULL;
+				n->commandmaps = $2;
+				n->dictnames = NULL;
+
+				n->condition = makeNode(DictMapExprElem);
+				n->condition->kind = DICT_MAP_CONST_TRUE;
+				n->condition->oper = 0;
+				n->condition->options = 0;
+				n->condition->left = NULL;
+				n->condition->right = NULL;
+
+				$$ = n;
+			}
+		;
+
+dictionary_map_element:
+			WHEN dictionary_map_clause THEN dictionary_map_command
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->condition = $2;
+				n->command = $4;
+				n->commandmaps = NULL;
+				n->dictnames = NULL;
+				$$ = n;
+			}
+			| WHEN dictionary_map_clause THEN dictionary_map_case
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->condition = $2;
+				n->command = NULL;
+				n->commandmaps = $4;
+				n->dictnames = NULL;
+				$$ = n;
+			}
+		;
+
+dictionary_map_clause:
+			dictionary_map_clause_expr_or { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_or:
+			dictionary_map_clause_expr_and OR dictionary_map_clause_expr_or
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_OR;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_and { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_and:
+			dictionary_map_clause_expr_not AND dictionary_map_clause_expr_and
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_AND;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_not { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_mapby_ext:
+			dictionary_map_clause_expr_dict MAP BY dictionary_map_clause_expr_mapby_ext
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_MAPBY;
+				n->options = 0;
+				n->left = $1;
+				n->right = $4;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_dict { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_mapby:
+			dictionary_map_clause_expr_dict MAP BY dictionary_map_clause_expr_mapby_ext
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_MAPBY;
+				n->options = 0;
+				n->left = $1;
+				n->right = $4;
+				$$ = n;
+			}
+		;
+
+dictionary_map_clause_expr_not:
+			NOT dictionary_map_clause_expr_not
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_NOT;
+				n->options = 0;
+				n->left = NULL;
+				n->right = $2;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_paren { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_paren:
+			'(' dictionary_map_clause_expr_or ')'	{ $$ = $2; }
+			| '(' dictionary_map_clause_expr_mapby ')' IS dictionary_map_clause_expr_dict_not dictionary_map_clause_expr_dict_flag
+			{
+				$$ = $2;
+				$$->options = $5 | $6;
+			}
+			| '(' dictionary_map_clause_expr_mapby ')'
+			{
+				$$ = $2;
+				$$->options = DICTMAP_OPT_NOT | DICTMAP_OPT_IS_NULL | DICTMAP_OPT_IS_STOP;
+			}
+			| dictionary_map_clause_expr_dict		{ $$ = $1; }
+		;
+
+dictionary_map_clause_expr_dict:
+			any_name
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERAND;
+				n->dictname = $1;
+				n->oper = 0;
+				n->options = DICTMAP_OPT_NOT | DICTMAP_OPT_IS_NULL | DICTMAP_OPT_IS_STOP;
+				n->left = n->right = NULL;
+				$$ = n;
+			}
+			| any_name IS dictionary_map_clause_expr_dict_not dictionary_map_clause_expr_dict_flag
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERAND;
+				n->dictname = $1;
+				n->oper = 0;
+				n->options = $3 | $4;
+				n->left = n->right = NULL;
+				$$ = n;
+			}
+		;
+
+dictionary_map_clause_expr_dict_not:
+			NOT				{ $$ = DICTMAP_OPT_NOT; }
+			| /* EMPTY */	{ $$ = 0; }
+		;
+
+dictionary_map_clause_expr_dict_flag:
+			NULL_P			{ $$ = DICTMAP_OPT_IS_NULL; }
+			| STOPWORD		{ $$ = DICTMAP_OPT_IS_STOP; }
+		;
+
+dictionary_map_command:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_command_expr_paren UNION dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_UNION;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_command_expr_paren EXCEPT dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_EXCEPT;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_command_expr_paren INTERSECT dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_INTERSECT;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_command_expr_paren MAP BY dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_MAPBY;
+				n->options = 0;
+				n->left = $1;
+				n->right = $4;
+				$$ = n;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_command ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERAND;
+				n->dictname = $1;
+				n->options = 0;
+				n->left = n->right = NULL;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -14728,6 +15010,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -14831,6 +15114,7 @@ unreserved_keyword:
 			| STATISTICS
 			| STDIN
 			| STDOUT
+			| STOPWORD
 			| STORAGE
 			| STRICT_P
 			| STRIP_P
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 34fe4c5..24e47f2 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..a7d9e0c
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,976 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of a text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Used during the parsing of TSMapRuleList from JSONB into internal
+ * datastructures.
+ */
+typedef enum TSMapRuleParseState
+{
+	TSMRPS_BEGINING,
+	TSMRPS_IN_CASES_ARRAY,
+	TSMRPS_IN_CASE,
+	TSMRPS_IN_CONDITION,
+	TSMRPS_IN_COMMAND,
+	TSMRPS_IN_EXPRESSION
+} TSMapRuleParseState;
+
+typedef enum TSMapRuleParseNodeType
+{
+	TSMRPT_UNKNOWN,
+	TSMRPT_NUMERIC,
+	TSMRPT_EXPRESSION,
+	TSMRPT_RULE_LIST,
+	TSMRPT_RULE,
+	TSMRPT_COMMAND,
+	TSMRPT_CONDITION,
+	TSMRPT_BOOL
+} TSMapRuleParseNodeType;
+
+typedef struct TSMapParseNode
+{
+	TSMapRuleParseNodeType type;
+	union
+	{
+		int			num_val;
+		bool		bool_val;
+		TSMapRule  *rule_val;
+		TSMapCommand *command_val;
+		TSMapRuleList *rule_list_val;
+		TSMapCondition *condition_val;
+		TSMapExpression *expression_val;
+	};
+} TSMapParseNode;
+
+static JsonbValue *TSMapToJsonbValue(TSMapRuleList *rules, JsonbParseState *jsonb_state);
+static TSMapParseNode *JsonbToTSMapParse(JsonbContainer *root, TSMapRuleParseState *parse_state);
+
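+/*
+ * Append the name of the dictionary with OID dictId to result, as found
+ * in the pg_ts_dict catalog.
+ */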
+static void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
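+/*
+ * Print an expression tree in its SQL form, parenthesizing subexpressions
+ * whose operator binds less tightly and spelling out the IS [NOT]
+ * NULL/STOPWORD options of dictionary leaves.
+ */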
+static void
+TSMapExpressionPrint(TSMapExpression *expression, StringInfo result)
+{
+	if (expression->dictionary == InvalidOid && expression->options != 0)
+		appendStringInfoChar(result, '(');
+
+	if (expression->left)
+	{
+		if (expression->left->operator != 0 && expression->left->operator < expression->operator)
+			appendStringInfoChar(result, '(');
+
+		TSMapExpressionPrint(expression->left, result);
+
+		if (expression->left->operator != 0 && expression->left->operator < expression->operator)
+			appendStringInfoChar(result, ')');
+	}
+
+	switch (expression->operator)
+	{
+		case DICTMAP_OP_OR:
+			appendStringInfoString(result, " OR ");
+			break;
+		case DICTMAP_OP_AND:
+			appendStringInfoString(result, " AND ");
+			break;
+		case DICTMAP_OP_NOT:
+			appendStringInfoString(result, " NOT ");
+			break;
+		case DICTMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case DICTMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case DICTMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case DICTMAP_OP_MAPBY:
+			appendStringInfoString(result, " MAP BY ");
+			break;
+	}
+
+	if (expression->right)
+	{
+		if (expression->right->operator != 0 && expression->right->operator < expression->operator)
+			appendStringInfoChar(result, '(');
+
+		TSMapExpressionPrint(expression->right, result);
+
+		if (expression->right->operator != 0 && expression->right->operator < expression->operator)
+			appendStringInfoChar(result, ')');
+	}
+
+	if (expression->dictionary == InvalidOid && expression->options != 0)
+		appendStringInfoChar(result, ')');
+
+	if (expression->dictionary != InvalidOid || expression->options != 0)
+	{
+		if (expression->dictionary != InvalidOid)
+			TSMapPrintDictName(expression->dictionary, result);
+		if (expression->options != (DICTMAP_OPT_NOT | DICTMAP_OPT_IS_NULL | DICTMAP_OPT_IS_STOP))
+		{
+			if (expression->options != 0)
+				appendStringInfoString(result, " IS ");
+			if (expression->options & DICTMAP_OPT_NOT)
+				appendStringInfoString(result, "NOT ");
+			if (expression->options & DICTMAP_OPT_IS_NULL)
+				appendStringInfoString(result, "NULL ");
+			if (expression->options & DICTMAP_OPT_IS_STOP)
+				appendStringInfoString(result, "STOPWORD ");
+		}
+	}
+}
+
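+/*
+ * Print a single rule: either a bare dictionary name (comma-separated
+ * syntax) or a WHEN ... THEN/ELSE branch of a CASE construct.
+ */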
+void
+TSMapPrintRule(TSMapRule *rule, StringInfo result, int depth)
+{
+	int			i;
+
+	if (rule->dictionary != InvalidOid)
+	{
+		TSMapPrintDictName(rule->dictionary, result);
+	}
+	else if (rule->condition.expression->is_true)
+	{
+		for (i = 0; i < depth; i++)
+			appendStringInfoChar(result, '\t');
+		appendStringInfoString(result, "ELSE ");
+	}
+	else
+	{
+		for (i = 0; i < depth; i++)
+			appendStringInfoChar(result, '\t');
+		appendStringInfoString(result, "WHEN ");
+		TSMapExpressionPrint(rule->condition.expression, result);
+		appendStringInfoString(result, " THEN\n");
+		for (i = 0; i < depth + 1; i++)
+			appendStringInfoString(result, "\t");
+	}
+
+	if (rule->command.is_expression)
+	{
+		TSMapExpressionPrint(rule->command.expression, result);
+	}
+	else if (rule->dictionary == InvalidOid)
+	{
+		TSMapPrintRuleList(rule->command.ruleList, result, depth + 1);
+	}
+}
+
+void
+TSMapPrintRuleList(TSMapRuleList *rules, StringInfo result, int depth)
+{
+	int			i;
+
+	for (i = 0; i < rules->count; i++)
+	{
+		if (rules->data[i].dictionary != InvalidOid)	/* Comma-separated
+														 * configuration syntax */
+		{
+			if (i > 0)
+				appendStringInfoString(result, ", ");
+			TSMapPrintDictName(rules->data[i].dictionary, result);
+		}
+		else
+		{
+			if (i == 0)
+			{
+				int			j;
+
+				for (j = 0; j < depth; j++)
+					appendStringInfoChar(result, '\t');
+				appendStringInfoString(result, "CASE\n");
+			}
+			else
+				appendStringInfoChar(result, '\n');
+			TSMapPrintRule(&rules->data[i], result, depth + 1);
+		}
+	}
+
+	if (rules->data[0].dictionary == InvalidOid)
+	{
+		appendStringInfoChar(result, '\n');
+		for (i = 0; i < depth; i++)
+			appendStringInfoChar(result, '\t');
+		appendStringInfoString(result, "END");
+	}
+}
+
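+/*
+ * Render the mapping of the given configuration and token type as
+ * human-readable text.
+ */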
+Datum
+dictionary_map_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	/* makeStringInfo() already initializes the buffer */
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype]->count > 0)
+	{
+		TSMapRuleList *rules = cacheEntry->map[tokentype];
+
+		TSMapPrintRuleList(rules, rawResult, 0);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult->data);
+	pfree(rawResult);
+
+	PG_RETURN_TEXT_P(result);
+}
+
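+/*
+ * Wrap an integer into a numeric JsonbValue.
+ */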
+static JsonbValue *
+TSIntToJsonbValue(int int_value)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(int_value, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(
+															 numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+
+}
+
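+/*
+ * Serialize an expression tree into JSONB: a dictionary leaf becomes an
+ * object with "options" and "dictionary" keys, a TRUE condition becomes a
+ * boolean, and an operator node becomes an object with "operator",
+ * "options", "left" and "right" keys.
+ */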
+static JsonbValue *
+TSExpressionToJsonb(TSMapExpression *expression, JsonbParseState *jsonb_state)
+{
+	if (expression == NULL)
+		return NULL;
+	if (expression->dictionary != InvalidOid)
+	{
+		JsonbValue	key;
+		JsonbValue *value = NULL;
+
+		pushJsonbValue(&jsonb_state, WJB_BEGIN_OBJECT, NULL);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("options");
+		key.val.string.val = "options";
+		value = TSIntToJsonbValue(expression->options);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("dictionary");
+		key.val.string.val = "dictionary";
+		value = TSIntToJsonbValue(expression->dictionary);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		return pushJsonbValue(&jsonb_state, WJB_END_OBJECT, NULL);
+	}
+	else if (expression->is_true)
+	{
+		JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+		value->type = jbvBool;
+		value->val.boolean = true;
+		return value;
+	}
+	else
+	{
+		JsonbValue	key;
+		JsonbValue *value = NULL;
+
+		pushJsonbValue(&jsonb_state, WJB_BEGIN_OBJECT, NULL);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("operator");
+		key.val.string.val = "operator";
+		value = TSIntToJsonbValue(expression->operator);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("options");
+		key.val.string.val = "options";
+		value = TSIntToJsonbValue(expression->options);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("left");
+		key.val.string.val = "left";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		value = TSExpressionToJsonb(expression->left, jsonb_state);
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("right");
+		key.val.string.val = "right";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		value = TSExpressionToJsonb(expression->right, jsonb_state);
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		return pushJsonbValue(&jsonb_state, WJB_END_OBJECT, NULL);
+	}
+}
+
+static JsonbValue *
+TSRuleToJsonbValue(TSMapRule *rule, JsonbParseState *jsonb_state)
+{
+	if (rule->dictionary != InvalidOid)
+	{
+		return TSIntToJsonbValue(rule->dictionary);
+	}
+	else
+	{
+		JsonbValue	key;
+		JsonbValue *value = NULL;
+
+		pushJsonbValue(&jsonb_state, WJB_BEGIN_OBJECT, NULL);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("condition");
+		key.val.string.val = "condition";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		value = TSExpressionToJsonb(rule->condition.expression, jsonb_state);
+
+		if (IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("command");
+		key.val.string.val = "command";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		if (rule->command.is_expression)
+			value = TSExpressionToJsonb(rule->command.expression, jsonb_state);
+		else
+			value = TSMapToJsonbValue(rule->command.ruleList, jsonb_state);
+
+		if (IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		return pushJsonbValue(&jsonb_state, WJB_END_OBJECT, NULL);
+	}
+}
+
+static JsonbValue *
+TSMapToJsonbValue(TSMapRuleList *rules, JsonbParseState *jsonb_state)
+{
+	JsonbValue *out;
+	int			i;
+
+	pushJsonbValue(&jsonb_state, WJB_BEGIN_ARRAY, NULL);
+	for (i = 0; i < rules->count; i++)
+	{
+		JsonbValue *value = TSRuleToJsonbValue(&rules->data[i], jsonb_state);
+
+		if (IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_ELEM, value);
+	}
+	out = pushJsonbValue(&jsonb_state, WJB_END_ARRAY, NULL);
+	return out;
+}
+
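+/*
+ * Convert a rule list into its JSONB representation.
+ */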
+Jsonb *
+TSMapToJsonb(TSMapRuleList *rules)
+{
+	JsonbParseState *jsonb_state = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapToJsonbValue(rules, jsonb_state);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+static inline TSMapExpression *
+JsonbToTSMapGetExpression(TSMapParseNode *node)
+{
+	TSMapExpression *result;
+
+	if (node->type == TSMRPT_NUMERIC)
+	{
+		result = palloc0(sizeof(TSMapExpression));
+		result->dictionary = node->num_val;
+	}
+	else if (node->type == TSMRPT_BOOL)
+	{
+		result = palloc0(sizeof(TSMapExpression));
+		result->is_true = node->bool_val;
+	}
+	else
+		result = node->expression_val;
+
+	pfree(node);
+
+	return result;
+}
+
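+/*
+ * Parse a single JSONB value into a TSMapParseNode; nested containers are
+ * handled recursively by JsonbToTSMapParse.
+ */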
+static TSMapParseNode *
+JsonbToTSMapParseObject(JsonbValue *value, TSMapRuleParseState *parse_state)
+{
+	TSMapParseNode *result = palloc0(sizeof(TSMapParseNode));
+	char	   *str;
+
+	switch (value->type)
+	{
+		case jbvNumeric:
+			result->type = TSMRPT_NUMERIC;
+			str = DatumGetCString(
+								  DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+			result->num_val = pg_atoi(str, sizeof(result->num_val), 0);
+			break;
+		case jbvArray:
+			Assert(*parse_state == TSMRPS_IN_COMMAND);
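+			/* fall through */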
+		case jbvBinary:
+			result = JsonbToTSMapParse(value->val.binary.data, parse_state);
+			break;
+		case jbvBool:
+			result->type = TSMRPT_BOOL;
+			result->bool_val = value->val.boolean;
+			break;
+		case jbvObject:
+		case jbvNull:
+		case jbvString:
+			break;
+	}
+	return result;
+}
+
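+/*
+ * Walk a JSONB container with the iterator API and rebuild the rule list,
+ * command, condition and expression structures it encodes.
+ */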
+static TSMapParseNode *
+JsonbToTSMapParse(JsonbContainer *root, TSMapRuleParseState *parse_state)
+{
+	JsonbIteratorToken r;
+	JsonbValue	val;
+	JsonbIterator *it;
+	TSMapParseNode *result;
+	TSMapParseNode *nested_result;
+	char	   *key;
+	TSMapRuleList *rule_list = NULL;
+
+	it = JsonbIteratorInit(root);
+	result = palloc0(sizeof(TSMapParseNode));
+	result->type = TSMRPT_UNKNOWN;
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+	{
+		switch (r)
+		{
+			case WJB_BEGIN_ARRAY:
+				if (*parse_state == TSMRPS_BEGINNING || *parse_state == TSMRPS_IN_EXPRESSION)
+				{
+					*parse_state = TSMRPS_IN_CASES_ARRAY;
+					rule_list = palloc0(sizeof(TSMapRuleList));
+				}
+				break;
+			case WJB_KEY:
+				key = palloc0(sizeof(char) * (val.val.string.len + 1));
+				memcpy(key, val.val.string.val, sizeof(char) * val.val.string.len);
+
+				r = JsonbIteratorNext(&it, &val, true);
+				if (*parse_state == TSMRPS_IN_CASE)
+				{
+					if (strcmp(key, "command") == 0)
+						*parse_state = TSMRPS_IN_EXPRESSION;
+					else if (strcmp(key, "condition") == 0)
+						*parse_state = TSMRPS_IN_EXPRESSION;
+				}
+
+				nested_result = JsonbToTSMapParseObject(&val, parse_state);
+
+				if (result->type == TSMRPT_RULE)
+				{
+					if (strcmp(key, "command") == 0)
+					{
+						result->rule_val->command.is_expression = nested_result->type == TSMRPT_EXPRESSION ||
+							nested_result->type == TSMRPT_NUMERIC;
+
+						if (result->rule_val->command.is_expression)
+							result->rule_val->command.expression = JsonbToTSMapGetExpression(nested_result);
+						else
+							result->rule_val->command.ruleList = nested_result->rule_list_val;
+					}
+					else if (strcmp(key, "condition") == 0)
+					{
+						result->rule_val->condition.expression = JsonbToTSMapGetExpression(nested_result);
+					}
+					*parse_state = TSMRPS_IN_CASE;
+				}
+				else if (result->type == TSMRPT_COMMAND)
+				{
+					result->command_val->is_expression = nested_result->type == TSMRPT_EXPRESSION;
+					if (result->command_val->is_expression)
+						result->command_val->expression = JsonbToTSMapGetExpression(nested_result);
+					else
+						result->command_val->ruleList = nested_result->rule_list_val;
+					*parse_state = TSMRPS_IN_COMMAND;
+				}
+				else if (result->type == TSMRPT_CONDITION)
+				{
+					result->condition_val->expression = JsonbToTSMapGetExpression(nested_result);
+					*parse_state = TSMRPS_IN_COMMAND;
+				}
+				else if (result->type == TSMRPT_EXPRESSION)
+				{
+					if (strcmp(key, "left") == 0)
+						result->expression_val->left = JsonbToTSMapGetExpression(nested_result);
+					else if (strcmp(key, "right") == 0)
+						result->expression_val->right = JsonbToTSMapGetExpression(nested_result);
+					else if (strcmp(key, "operator") == 0)
+						result->expression_val->operator = nested_result->num_val;
+					else if (strcmp(key, "options") == 0)
+						result->expression_val->options = nested_result->num_val;
+					else if (strcmp(key, "dictionary") == 0)
+						result->expression_val->dictionary = nested_result->num_val;
+				}
+
+				break;
+			case WJB_BEGIN_OBJECT:
+				if (*parse_state == TSMRPS_IN_CASES_ARRAY)
+				{
+					*parse_state = TSMRPS_IN_CASE;
+					result->type = TSMRPT_RULE;
+					result->rule_val = palloc0(sizeof(TSMapRule));
+				}
+				else if (*parse_state == TSMRPS_IN_COMMAND)
+				{
+					result->type = TSMRPT_COMMAND;
+					result->command_val = palloc0(sizeof(TSMapCommand));
+				}
+				else if (*parse_state == TSMRPS_IN_CONDITION)
+				{
+					result->type = TSMRPT_CONDITION;
+					result->condition_val = palloc0(sizeof(TSMapCondition));
+				}
+				else if (*parse_state == TSMRPS_IN_EXPRESSION)
+				{
+					result->type = TSMRPT_EXPRESSION;
+					result->expression_val = palloc0(sizeof(TSMapExpression));
+				}
+				break;
+			case WJB_END_OBJECT:
+				if (*parse_state == TSMRPS_IN_CASE)
+					*parse_state = TSMRPS_IN_CASES_ARRAY;
+				else if (*parse_state == TSMRPS_IN_CONDITION || *parse_state == TSMRPS_IN_COMMAND)
+					*parse_state = TSMRPS_IN_CASE;
+				if (rule_list && result->type == TSMRPT_RULE)
+				{
+					rule_list->count++;
+					if (rule_list->data)
+						rule_list->data = repalloc(rule_list->data, sizeof(TSMapRule) * rule_list->count);
+					else
+						rule_list->data = palloc0(sizeof(TSMapRule) * rule_list->count);
+					memcpy(rule_list->data + rule_list->count - 1, result->rule_val, sizeof(TSMapRule));
+				}
+				else
+					return result;
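+				/* fall through */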
+			case WJB_END_ARRAY:
+				break;
+			default:
+				nested_result = JsonbToTSMapParseObject(&val, parse_state);
+				if (nested_result->type == TSMRPT_NUMERIC)
+				{
+					if (*parse_state == TSMRPS_IN_CASES_ARRAY)
+					{
+						/*
+						 * Add dictionary Oid into array (comma-separated
+						 * configuration)
+						 */
+						rule_list->count++;
+						if (rule_list->data)
+							rule_list->data = repalloc(rule_list->data, sizeof(TSMapRule) * rule_list->count);
+						else
+							rule_list->data = palloc0(sizeof(TSMapRule) * rule_list->count);
+						memset(rule_list->data + rule_list->count - 1, 0, sizeof(TSMapRule));
+						rule_list->data[rule_list->count - 1].dictionary = nested_result->num_val;
+					}
+					else if (result->type == TSMRPT_UNKNOWN && *parse_state == TSMRPS_IN_EXPRESSION)
+					{
+						result->type = TSMRPT_EXPRESSION;
+						result->expression_val = palloc0(sizeof(TSMapExpression));
+					}
+					if (result->type == TSMRPT_EXPRESSION)
+						result->expression_val->dictionary = nested_result->num_val;
+				}
+				else if (nested_result->type == TSMRPT_RULE && rule_list)
+				{
+					rule_list->count++;
+					if (rule_list->data)
+						rule_list->data = repalloc(rule_list->data, sizeof(TSMapRule) * rule_list->count);
+					else
+						rule_list->data = palloc0(sizeof(TSMapRule) * rule_list->count);
+					memcpy(rule_list->data + rule_list->count - 1, nested_result->rule_val, sizeof(TSMapRule));
+				}
+				break;
+		}
+	}
+	result->type = TSMRPT_RULE_LIST;
+	result->rule_list_val = rule_list;
+	return result;
+}
+
+TSMapRuleList *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+	TSMapRuleList *result = palloc0(sizeof(TSMapRuleList));
+	TSMapRuleParseState parse_state = TSMRPS_BEGINNING;
+	TSMapParseNode *parsing_result;
+
+	parsing_result = JsonbToTSMapParse(root, &parse_state);
+
+	Assert(parsing_result->type == TSMRPT_RULE_LIST);
+	result = parsing_result->rule_list_val;
+	pfree(parsing_result);
+
+	return result;
+}
+
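+/*
+ * Recursively substitute newDict for oldDict in an expression tree.
+ */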
+static void
+TSMapReplaceDictionaryParseExpression(TSMapExpression *expr, Oid oldDict, Oid newDict)
+{
+	if (expr->left)
+		TSMapReplaceDictionaryParseExpression(expr->left, oldDict, newDict);
+	if (expr->right)
+		TSMapReplaceDictionaryParseExpression(expr->right, oldDict, newDict);
+
+	if (expr->dictionary == oldDict)
+		expr->dictionary = newDict;
+}
+
+static void
+TSMapReplaceDictionaryParseMap(TSMapRule *rule, Oid oldDict, Oid newDict)
+{
+	if (rule->dictionary != InvalidOid)
+	{
+		/* A plain dictionary rule: replace the OID in place */
+		if (rule->dictionary == oldDict)
+			rule->dictionary = newDict;
+	}
+	else
+	{
+		TSMapReplaceDictionaryParseExpression(rule->condition.expression, oldDict, newDict);
+
+		if (rule->command.is_expression)
+			TSMapReplaceDictionaryParseExpression(rule->command.expression, oldDict, newDict);
+		else
+			TSMapReplaceDictionary(rule->command.ruleList, oldDict, newDict);
+	}
+}
+
+void
+TSMapReplaceDictionary(TSMapRuleList *rules, Oid oldDict, Oid newDict)
+{
+	int			i;
+
+	for (i = 0; i < rules->count; i++)
+		TSMapReplaceDictionaryParseMap(&rules->data[i], oldDict, newDict);
+}
+
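+/*
+ * Collect the OIDs of all dictionaries used in an expression into an
+ * InvalidOid-terminated array.
+ */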
+static Oid *
+TSMapGetDictionariesParseExpression(TSMapExpression *expr)
+{
+	Oid		   *left_res;
+	Oid		   *right_res;
+	Oid		   *result;
+
+	left_res = right_res = NULL;
+
+	if (expr->left && expr->right)
+	{
+		Oid		   *ptr;
+		int			count_l;
+		int			count_r;
+
+		left_res = TSMapGetDictionariesParseExpression(expr->left);
+		right_res = TSMapGetDictionariesParseExpression(expr->right);
+
+		for (ptr = left_res, count_l = 0; *ptr != InvalidOid; ptr++)
+			count_l++;
+		for (ptr = right_res, count_r = 0; *ptr != InvalidOid; ptr++)
+			count_r++;
+
+		result = palloc0(sizeof(Oid) * (count_l + count_r + 1));
+		memcpy(result, left_res, sizeof(Oid) * count_l);
+		memcpy(result + count_l, right_res, sizeof(Oid) * count_r);
+		result[count_l + count_r] = InvalidOid;
+
+		pfree(left_res);
+		pfree(right_res);
+	}
+	else
+	{
+		result = palloc0(sizeof(Oid) * 2);
+		result[0] = expr->dictionary;
+		result[1] = InvalidOid;
+	}
+
+	return result;
+}
+
+static Oid *
+TSMapGetDictionariesParseRule(TSMapRule *rule)
+{
+	Oid		   *result;
+
+	if (rule->dictionary != InvalidOid)
+	{
+		result = palloc0(sizeof(Oid) * 2);
+		result[0] = rule->dictionary;
+		result[1] = InvalidOid;
+	}
+	else
+	{
+		if (rule->command.is_expression)
+			result = TSMapGetDictionariesParseExpression(rule->command.expression);
+		else
+			result = TSMapGetDictionariesList(rule->command.ruleList);
+	}
+	return result;
+}
+
+Oid *
+TSMapGetDictionariesList(TSMapRuleList *rules)
+{
+	int			i;
+	Oid		  **results_arr;
+	int		   *sizes;
+	Oid		   *result;
+	int			size;
+	int			offset;
+
+	results_arr = palloc0(sizeof(Oid *) * rules->count);
+	sizes = palloc0(sizeof(int) * rules->count);
+	size = 0;
+	for (i = 0; i < rules->count; i++)
+	{
+		int			count;
+		Oid		   *ptr;
+
+		results_arr[i] = TSMapGetDictionariesParseRule(&rules->data[i]);
+
+		for (count = 0, ptr = results_arr[i]; *ptr != InvalidOid; ptr++)
+			count++;
+
+		sizes[i] = count;
+		size += count;
+	}
+
+	result = palloc(sizeof(Oid) * (size + 1));
+	offset = 0;
+	for (i = 0; i < rules->count; i++)
+	{
+		memcpy(result + offset, results_arr[i], sizeof(Oid) * sizes[i]);
+		offset += sizes[i];
+		pfree(results_arr[i]);
+	}
+	result[offset] = InvalidOid;
+
+	pfree(results_arr);
+	pfree(sizes);
+
+	return result;
+}
+
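+/*
+ * Flatten a rule list into a ListDictionary holding the OIDs of all
+ * dictionaries it references.
+ */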
+ListDictionary *
+TSMapGetListDictionary(TSMapRuleList *rules)
+{
+	ListDictionary *result = palloc0(sizeof(ListDictionary));
+	Oid		   *oids = TSMapGetDictionariesList(rules);
+	int			i;
+	int			count;
+	Oid		   *ptr;
+
+	ptr = oids;
+	count = 0;
+	while (*ptr != InvalidOid)
+	{
+		count++;
+		ptr++;
+	}
+
+	result->len = count;
+	result->dictIds = palloc0(sizeof(Oid) * result->len);
+	ptr = oids;
+	i = 0;
+	while (*ptr != InvalidOid)
+		result->dictIds[i++] = *(ptr++);
+
+	return result;
+}
+
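+/*
+ * Deep-copy an expression tree into the given memory context, so that a
+ * parsed map can outlive the context it was built in.
+ */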
+static TSMapExpression *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expr, MemoryContext context)
+{
+	TSMapExpression *result;
+
+	if (expr == NULL)
+		return NULL;
+	result = MemoryContextAlloc(context, sizeof(TSMapExpression));
+	memset(result, 0, sizeof(TSMapExpression));
+	if (expr->dictionary != InvalidOid || expr->is_true)
+	{
+		result->dictionary = expr->dictionary;
+		result->is_true = expr->is_true;
+		result->options = expr->options;
+		result->left = result->right = NULL;
+		result->operator = 0;
+	}
+	else
+	{
+		result->left = TSMapExpressionMoveToMemoryContext(expr->left, context);
+		result->right = TSMapExpressionMoveToMemoryContext(expr->right, context);
+		result->operator = expr->operator;
+		result->options = expr->options;
+		result->dictionary = InvalidOid;
+		result->is_true = false;
+	}
+	return result;
+}
+
+static TSMapRule
+TSMapRuleMoveToMemoryContext(TSMapRule *rule, MemoryContext context)
+{
+	TSMapRule	result;
+
+	memset(&result, 0, sizeof(TSMapRule));
+
+	if (rule->dictionary != InvalidOid)
+	{
+		result.dictionary = rule->dictionary;
+	}
+	else
+	{
+		result.condition.expression = TSMapExpressionMoveToMemoryContext(rule->condition.expression, context);
+
+		result.command.is_expression = rule->command.is_expression;
+		if (rule->command.is_expression)
+			result.command.expression = TSMapExpressionMoveToMemoryContext(rule->command.expression, context);
+		else
+			result.command.ruleList = TSMapMoveToMemoryContext(rule->command.ruleList, context);
+	}
+
+	return result;
+}
+
+TSMapRuleList *
+TSMapMoveToMemoryContext(TSMapRuleList *rules, MemoryContext context)
+{
+	int			i;
+	TSMapRuleList *result = MemoryContextAlloc(context, sizeof(TSMapRuleList));
+
+	memset(result, 0, sizeof(TSMapRuleList));
+
+	result->count = rules->count;
+	result->data = MemoryContextAlloc(context, sizeof(TSMapRule) * result->count);
+
+	for (i = 0; i < result->count; i++)
+		result->data[i] = TSMapRuleMoveToMemoryContext(&rules->data[i], context);
+
+	return result;
+}
+
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapExpressionFree(expression->left);
+	if (expression->right)
+		TSMapExpressionFree(expression->right);
+	pfree(expression);
+}
+
+static void
+TSMapRuleFree(TSMapRule rule)
+{
+	if (rule.dictionary == InvalidOid)
+	{
+		if (rule.command.is_expression)
+			TSMapExpressionFree(rule.command.expression);
+		else
+			TSMapFree(rule.command.ruleList);
+
+		TSMapExpressionFree(rule.condition.expression);
+	}
+}
+
+void
+TSMapFree(TSMapRuleList *rules)
+{
+	int			i;
+
+	for (i = 0; i < rules->count; i++)
+		TSMapRuleFree(rules->data[i]);
+	pfree(rules->data);
+	pfree(rules);
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index ad5dddf..c71658b 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,6 +16,10 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
@@ -28,328 +32,1296 @@ typedef struct ParsedLex
 	int			type;
 	char	   *lemm;
 	int			lenlemm;
+	int			maplen;
+	bool	   *accepted;
+	bool	   *rejected;
+	bool	   *notFinished;
+	bool	   *holdAccepted;
 	struct ParsedLex *next;
+	TSMapRule  *relatedRule;
 } ParsedLex;
 
-typedef struct ListParsedLex
-{
-	ParsedLex  *head;
-	ParsedLex  *tail;
-} ListParsedLex;
+typedef struct ListParsedLex
+{
+	ParsedLex  *head;
+	ParsedLex  *tail;
+} ListParsedLex;
+
+typedef struct DictState
+{
+	Oid			relatedDictionary;
+	DictSubState subState;
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which were processed by a
+										 * thesaurus-like dictionary but not
+										 * yet accepted */
+	bool		storeToAccepted;	/* Should the current token be appended
+									 * to the accepted or the intermediate
+									 * list? */
+	bool		processed;		/* Did the dictionary take control during
+								 * processing of the current token? */
+	TSLexeme   *tmpResult;		/* Last result returned by a thesaurus-like
+								 * dictionary that is still waiting for more
+								 * lexemes */
+} DictState;
+
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
+
+typedef struct LexemesBufferEntry
+{
+	Oid			dictId;
+	ParsedLex  *token;
+	TSLexeme   *data;
+} LexemesBufferEntry;
+
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;
+} ResultStorage;
+
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;
+	DictSubState dictState;
+	DictStateList dslist;
+	ListParsedLex towork;		/* current list to work */
+	ListParsedLex waste;		/* list of lexemes that already lexized */
+	LexemesBuffer buffer;
+	ResultStorage delayedResults;
+	Oid			skipDictionary;
+} LexizeData;
+
+typedef struct TSDebugContext
+{
+	TSConfigCacheEntry *cfg;
+	TSParserCacheEntry *prsobj;
+	LexDescr   *tokenTypes;
+	void	   *prsdata;
+	LexizeData	ldata;
+	int			tokentype;		/* Last token tokentype */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex */
+	TSMapRule  *rule;			/* Rule which produced output */
+} TSDebugContext;
+
+static TSLexeme *LexizeExecMapBy(LexizeData *ld, ParsedLex *token, TSMapExpression *left, TSMapExpression *right);
+
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+static void
+LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
+{
+	if (list->tail)
+	{
+		list->tail->next = newpl;
+		list->tail = newpl;
+	}
+	else
+		list->head = list->tail = newpl;
+	newpl->next = NULL;
+}
+
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+	{
+		*correspondLexem = ld->waste.head;
+	}
+	else
+	{
+		LPLClear(&ld->waste);
+	}
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
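+/*
+ * DictStateList helpers: keep per-dictionary state while a thesaurus-like
+ * dictionary is waiting for more input tokens.
+ */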
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
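+/*
+ * LexemesBuffer helpers: cache each dictionary's output per token, so
+ * that a dictionary referenced several times within one rule is executed
+ * only once.
+ */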
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, Oid dictId, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (buffer->data[i].dictId == dictId && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, Oid dictId, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (buffer->data[i].dictId == dictId && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, Oid dictId, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (buffer->data[i].dictId == dictId && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, Oid dictId, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, dictId, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].dictId = dictId;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*
+ * TSLexeme util functions
+ */
+
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove duplicate lexemes, as well as duplicate copies of whole nvariant
+ * groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+					{
+						shouldCopy[i + j] = false;
+					}
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	pfree(lexeme);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_i = 0;
+	int			right_i = 0;
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*
+ * Lexemes set operations
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the first right lexeme is flagged with TSL_ADDPOS, so
+ * that the right lexemes get a position following the left ones
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+		{
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+		}
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+		{
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+		}
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Result storage functions
+ */
+
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+static void
+ResultStorageClear(ResultStorage *storage)
+{
+	ResultStorageClearLexemes(storage);
+
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*
+ * Condition and command execution
+ */
+
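+/*
+ * Run the dictionary's lexize method on the token. Results are cached in
+ * the lexemes buffer, and multi-token (thesaurus-like) processing state is
+ * tracked in the DictStateList.
+ */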
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, Oid dictId)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictId, token))
+	{
+		res = LexemesBufferGet(&ld->buffer, dictId, token);
+	}
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(
+														 &(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictId, token, res);
+	}
+
+	return res;
+}
+
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, Oid dictId)
+{
+	TSLexeme   *lexemes = LexizeExecDictionary(ld, token, dictId);
+
+	if (lexemes)
+		return false;
+	else
+		return !LexizeExecDictionaryWaitNext(ld, dictId);
+}
+
+static bool
+LexizeExecIsStop(LexizeData *ld, ParsedLex *token, Oid dictId)
+{
+	TSLexeme   *lex = LexizeExecDictionary(ld, token, dictId);
+
+	return lex != NULL && lex[0].lexeme == NULL;
+}
+
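+/*
+ * Evaluate a condition expression for the token: IS NULL / IS STOPWORD
+ * tests on dictionary output, combined with NOT, AND and OR.
+ */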
+static bool
+LexizeExecExpressionBool(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	bool		result;
+
+	if (expression == NULL)
+		result = false;
+	else if (expression->is_true)
+		result = true;
+	else if (expression->dictionary != InvalidOid)
+	{
+		bool		is_null = LexizeExecIsNull(ld, token, expression->dictionary);
+		bool		is_stop = LexizeExecIsStop(ld, token, expression->dictionary);
+		bool		invert = (expression->options & DICTMAP_OPT_NOT) != 0;
+
+		result = true;
+		if ((expression->options & DICTMAP_OPT_IS_NULL) != 0)
+			result = result && (invert ? !is_null : is_null);
+		if ((expression->options & DICTMAP_OPT_IS_STOP) != 0)
+			result = result && (invert ? !is_stop : is_stop);
+	}
+	else
+	{
+		if (expression->operator == DICTMAP_OP_MAPBY)
+		{
+			TSLexeme   *mapby_result = LexizeExecMapBy(ld, token, expression->left, expression->right);
+			bool		is_null = mapby_result == NULL;
+			bool		is_stop = mapby_result != NULL && mapby_result[0].lexeme == NULL;
+			bool		invert = (expression->options & DICTMAP_OPT_NOT) != 0;
+
+			if (expression->left->dictionary != InvalidOid && LexizeExecDictionaryWaitNext(ld, expression->left->dictionary))
+				is_null = false;
+
+			result = true;
+			if ((expression->options & DICTMAP_OPT_IS_NULL) != 0)
+				result = result && (invert ? !is_null : is_null);
+			if ((expression->options & DICTMAP_OPT_IS_STOP) != 0)
+				result = result && (invert ? !is_stop : is_stop);
+		}
+		else
+		{
+			bool		res_left = LexizeExecExpressionBool(ld, token, expression->left);
+			bool		res_right = LexizeExecExpressionBool(ld, token, expression->right);
+
+			switch (expression->operator)
+			{
+				case DICTMAP_OP_NOT:
+					result = !res_right;
+					break;
+				case DICTMAP_OP_OR:
+					result = res_left || res_right;
+					break;
+				case DICTMAP_OP_AND:
+					result = res_left && res_right;
+					break;
+				default:
+					ereport(ERROR,
+							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+							 errmsg("invalid text search configuration boolean expression")));
+					break;
+			}
+		}
+	}
+
+	return result;
+}
+
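+/*
+ * Evaluate a command expression for the token, combining dictionary
+ * outputs with UNION, EXCEPT, INTERSECT and MAP BY.
+ */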
+static TSLexeme *
+LexizeExecExpressionSet(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *result;
+
+	if (expression->dictionary != InvalidOid)
+	{
+		result = LexizeExecDictionary(ld, token, expression->dictionary);
+	}
+	else
+	{
+		if (expression->operator == DICTMAP_OP_MAPBY)
+		{
+			result = LexizeExecMapBy(ld, token, expression->left, expression->right);
+		}
+		else
+		{
+			TSLexeme   *res_left = LexizeExecExpressionSet(ld, token, expression->left);
+			TSLexeme   *res_right = LexizeExecExpressionSet(ld, token, expression->right);
+
+			switch (expression->operator)
+			{
+				case DICTMAP_OP_UNION:
+					result = TSLexemeUnion(res_left, res_right);
+					break;
+				case DICTMAP_OP_EXCEPT:
+					result = TSLexemeExcept(res_left, res_right);
+					break;
+				case DICTMAP_OP_INTERSECT:
+					result = TSLexemeIntersect(res_left, res_right);
+					break;
+				default:
+					ereport(ERROR,
+							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+							 errmsg("invalid text search configuration result set expression")));
+					result = NULL;
+					break;
+			}
+		}
+	}
+
+	return result;
+}
 
-typedef struct
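+/*
+ * Evaluate "left MAP BY right": each lexeme produced by the right
+ * expression is fed back into the left expression as an input token and
+ * the results are combined with UNION. If the right side produces
+ * nothing, the original token is passed to the left side.
+ */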
+static TSLexeme *
+LexizeExecMapBy(LexizeData *ld, ParsedLex *token, TSMapExpression *left, TSMapExpression *right)
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	TSLexeme   *right_res = LexizeExecExpressionSet(ld, token, right);
+	TSLexeme   *result = NULL;
+	int			right_size = TSLexemeGetSize(right_res);
+	int			i;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+	if (right_res == NULL)
+		return LexizeExecExpressionSet(ld, token, left);
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
-} LexizeData;
+	for (i = 0; i < right_size; i++)
+	{
+		TSLexeme   *tmp_res = NULL;
+		TSLexeme   *prev_res;
+		ParsedLex	tmp_token;
+
+		tmp_token.lemm = right_res[i].lexeme;
+		tmp_token.lenlemm = strlen(right_res[i].lexeme);
+		tmp_token.type = token->type;
+		tmp_token.next = NULL;
+
+		tmp_res = LexizeExecExpressionSet(ld, &tmp_token, left);
+		prev_res = result;
+		result = TSLexemeUnion(prev_res, tmp_res);
+		if (prev_res)
+			pfree(prev_res);
+	}
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
-{
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
+	return result;
 }
 
-static void
-LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
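+/*
+ * Find the first applicable rule for the token and execute its command:
+ * either the first dictionary that recognizes the token (comma-separated
+ * syntax, including TSL_FILTER handling) or the first CASE branch whose
+ * condition evaluates to true.
+ */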
+static TSLexeme *
+LexizeExecCase(LexizeData *ld, ParsedLex *originalToken, TSMapRuleList *rules, TSMapRule **selectedRule)
 {
-	if (list->tail)
+	TSLexeme   *res = NULL;
+	ParsedLex	token = *originalToken;
+
+	if (ld->cfg->lenmap <= token.type || rules == NULL)
 	{
-		list->tail->next = newpl;
-		list->tail = newpl;
+		res = NULL;
 	}
 	else
-		list->head = list->tail = newpl;
-	newpl->next = NULL;
-}
-
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+	{
+		int			i;
 
-	if (list->head)
-		list->head = list->head->next;
+		for (i = 0; i < rules->count; i++)
+		{
+			if (rules->data[i].dictionary != InvalidOid)
+			{
+				/* Comma-separated syntax configuration */
+				res = LexizeExecDictionary(ld, &token, rules->data[i].dictionary);
+				if (!LexizeExecIsNull(ld, &token, rules->data[i].dictionary))
+				{
+					if (selectedRule)
+						*selectedRule = rules->data + i;
+					originalToken->relatedRule = rules->data + i;
+
+					if (res && (res[0].flags & TSL_FILTER))
+					{
+						token.lemm = res[0].lexeme;
+						token.lenlemm = strlen(res[0].lexeme);
+					}
+					else
+					{
+						break;
+					}
+				}
+			}
+			else if (LexizeExecExpressionBool(ld, &token, rules->data[i].condition.expression))
+			{
+				if (selectedRule)
+					*selectedRule = rules->data + i;
+				originalToken->relatedRule = rules->data + i;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+				if (rules->data[i].command.is_expression)
+					res = LexizeExecExpressionSet(ld, &token, rules->data[i].command.expression);
+				else
+					res = LexizeExecCase(ld, &token, rules->data[i].command.ruleList, selectedRule);
+				break;
+			}
+		}
+	}
 
 	return res;
 }
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+/*
+ * LexizeExec and helper functions
+ */
+
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+	int			i;
+	TSLexeme   *res = NULL;
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
-}
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-static void
-RemoveHead(LexizeData *ld)
-{
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
 
-	ld->posDict = 0;
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
 static TSLexeme *
-LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
+LexizeExec(LexizeData *ld, ParsedLex **correspondLexem, TSMapRule **selectedRule)
 {
+	ParsedLex  *token;
+	TSMapRuleList *rules;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
-	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
 
-		while (ld->towork.head)
+	token = ld->towork.head;
+	if (token == NULL)
+	{
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
+	else
+	{
+		rules = ld->cfg->map[token->type];
+		if (rules != NULL)
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			res = LexizeExecCase(ld, token, rules, selectedRule);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
+		{
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
 			}
+		}
+
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || rules != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			for (i = ld->posDict; i < map->len; i++)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				if (!ld->dslist.states[i].processed)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
 				}
+			}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
-
-				if (res->flags & TSL_FILTER)
+			if (intermediateTokens && intermediateTokens->head)
+			{
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
-
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
 			}
-
-			RemoveHead(ld);
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (rules != NULL)
+				res = NULL;
 		}
+
+		if (rules != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
-		/*
-		 * Dictionary ld->curDictId asks  us about following words
-		 */
+	if (prevIterationResult)
+	{
+		res = prevIterationResult;
+	}
+	else
+	{
+		int			i;
 
-		while (ld->curSub)
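+		/*
+		 * Remember the current token for each dictionary that is in the
+		 * middle of a multi-token phrase: either as an accepted part of
+		 * the phrase or as an intermediate candidate that may be rolled
+		 * back later.
+		 */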
+		for (i = 0; i < ld->dslist.listLength; i++)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
+			if (ld->dslist.states[i].storeToAccepted)
 			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
-
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
 			}
-
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
-
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
-
-			if (ld->dictState.getnext)
+			else
 			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
+		}
+	}
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	if (removeHead)
+		RemoveHead(ld);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	if (ld->dslist.listLength > 0)
+	{
+		/*
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
+		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
+		if (res)
+			pfree(res);
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
+		{
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
+
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			TSLexeme   *oldRes = res;
+
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
+			if (oldRes)
+				pfree(oldRes);
+			ResultStorageClear(&ld->delayedResults);
 		}
+		setCorrLex(ld, correspondLexem);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
+
+	LexemesBufferClear(&ld->buffer);
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
+
+	return res;
 }
 
 /*
+ * ts_parse API functions
+ */
+
+/*
  * Parse string and lexize words.
  *
  * prs will be filled in.
@@ -357,7 +1329,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1347,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
-		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
+		while ((norms = LexizeExec(&ldata, NULL, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,12 +1407,200 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
 /*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to towork queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Parse text and print debug information for each token, such as
+ * token type, dictionary map configuration, selected command and lexemes.
+ * Arguments: cfgId (regconfig Oid), inputText (text *)
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens), &(context->rule));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens), &(context->rule));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 6);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			TSMapPrintRuleList(context->ldata.cfg->map[lex->type], str, 0);
+			values[3] = str->data;
+			str = makeStringInfo();
+
+			if (lex->relatedRule)
+			{
+				TSMapPrintRule(lex->relatedRule, str, 0);
+				values[4] = str->data;
+				str = makeStringInfo();
+			}
+		}
+
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+		{
+			appendStringInfoChar(str, '}');
+			values[5] = str->data;
+		}
+		else
+			values[5] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*
  * Headline framework
  */
 static void
@@ -532,12 +1698,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,45 +1717,50 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
-			if ((norms = LexizeExec(&ldata, &lexs)) != NULL)
+			if ((norms = LexizeExec(&ldata, &lexs, NULL)) != NULL)
 			{
 				prs->vectorpos++;
 				addHLParsedLex(prs, query, lexs, norms);
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +1813,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index 56d4cf0..3868b3c 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -19,7 +19,17 @@
 #include "miscadmin.h"
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
-
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_config_map.h"
+#include "catalog/pg_ts_dict.h"
+#include "storage/lockdefs.h"
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "utils/fmgroids.h"
+#include "utils/builtins.h"
+#include "tsearch/ts_cache.h"
 
 /*
  * Given the base name and extension of a tsearch config file, return
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 888edbb..0628b9c 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index da5c8ea..da18387 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,10 +39,13 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
 #include "utils/inval.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/regproc.h"
@@ -51,13 +54,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -414,11 +416,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapRuleList *mapruleslist[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapRuleList *rules_tmp;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -449,8 +450,10 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+				{
+					if (entry->map[i])
+						TSMapFree(entry->map[i]);
+				}
 				pfree(entry->map);
 			}
 		}
@@ -464,13 +467,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -482,6 +483,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapruleslist, 0, sizeof(mapruleslist));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -491,51 +493,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
+			maxtokentype = toktype;
+			rules_tmp = JsonbToTSMap(DatumGetJsonbP(PointerGetDatum(&cfgmap->mapdicts)));
+			mapruleslist[maxtokentype] = TSMapMoveToMemoryContext(rules_tmp, CacheMemoryContext);
+			TSMapFree(rules_tmp);
+			rules_tmp = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapRuleList **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapRuleList *) * entry->lenmap);
+			memcpy(entry->map, mapruleslist,
+				   sizeof(TSMapRuleList *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8733426..ceff4d1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14186,10 +14186,11 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 					  "SELECT\n"
 					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+					  "  dictionary_map_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
 					  "FROM pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
+					  "GROUP BY m.mapcfg, m.maptokentype\n"
+					  "ORDER BY m.mapcfg, m.maptokentype",
 					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -14203,20 +14204,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 		char	   *tokenname = PQgetvalue(res, i, i_tokenname);
 		char	   *dictname = PQgetvalue(res, i, i_dictname);
 
-		if (i == 0 ||
-			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
-		{
-			/* starting a new token type, so start a new command */
-			if (i > 0)
-				appendPQExpBufferStr(q, ";\n");
-			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
-							  fmtId(cfginfo->dobj.name));
-			/* tokenname needs quoting, dictname does NOT */
-			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
-							  fmtId(tokenname), dictname);
-		}
-		else
-			appendPQExpBuffer(q, ", %s", dictname);
+		/* each row is a separate token type, so it starts a new command */
+		if (i > 0)
+			appendPQExpBufferStr(q, ";\n");
+		appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
+						  fmtId(cfginfo->dobj.name));
+		/* tokenname needs quoting, dictname does NOT */
+		appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH\n%s",
+						  fmtId(tokenname), dictname);
 	}
 
 	if (ntups > 0)
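
For illustration, pg_dump now emits one command per token type, with the
whole map rendered by dictionary_map_to_text. Roughly, assuming the function
prints the CASE form used in this patch (configuration and dictionary names
are placeholders):

ALTER TEXT SEARCH CONFIGURATION my_cfg
    ADD MAPPING FOR asciiword WITH
CASE
    WHEN english_hunspell THEN english_hunspell
    ELSE english_stem
END;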
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 0688571..98f000b 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4580,13 +4580,7 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 					  "  ( SELECT t.alias FROM\n"
 					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
+					  " dictionary_map_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
 					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
 					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index 9a7f5b2..362fd17 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */
 
 /*							yyyymmddN */
-#define CATALOG_VERSION_NO	201710161
+#define CATALOG_VERSION_NO	201710181
 
 #endif
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index ef84936..db487cf 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 93c031a..572374e 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4925,6 +4925,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_map_to_text	PGNSP PGUID 12 100 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_map_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configurationconfiguration  map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,25,25,1009}" "{i,i,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
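
A minimal usage sketch for the two new functions (the exact output columns
are shown in the regression test changes below):

SELECT * FROM ts_debug('english', 'a title');

SELECT dictionary_map_to_text(m.mapcfg, m.maptokentype)
FROM pg_catalog.pg_ts_config_map AS m
WHERE m.mapcfg = 'english'::regconfig;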
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index 3df0519..ea0fd0a 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,106 @@
  */
 #define TSConfigMapRelationId	3603
 
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+typedef struct TSMapExpression
+{
+	int			operator;
+	Oid			dictionary;
+	int			options;
+	bool		is_true;
+	struct TSMapExpression *left;
+	struct TSMapExpression *right;
+} TSMapExpression;
+
+typedef struct TSMapCommand
+{
+	bool		is_expression;
+	void	   *ruleList;		/* this is a TSMapRuleList object */
+	TSMapExpression *expression;
+} TSMapCommand;
+
+typedef struct TSMapCondition
+{
+	TSMapExpression *expression;
+} TSMapCondition;
+
+typedef struct TSMapRule
+{
+	Oid			dictionary;
+	TSMapCondition condition;
+	TSMapCommand command;
+} TSMapRule;
+
+typedef struct TSMapRuleList
+{
+	TSMapRule  *data;
+	int			count;
+} TSMapRuleList;
+
 /* ----------------
  *		compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define DICTMAP_OP_OR			1
+#define DICTMAP_OP_AND			2
+#define DICTMAP_OP_THEN			3
+#define DICTMAP_OP_MAPBY		4
+#define DICTMAP_OP_UNION		5
+#define DICTMAP_OP_EXCEPT		6
+#define DICTMAP_OP_INTERSECT	7
+#define DICTMAP_OP_NOT			8
+
+/* ----------------
+ *		Dictionary map operand options (bit mask)
+ * ----------------
+ */
+
+#define DICTMAP_OPT_NOT			1
+#define DICTMAP_OPT_IS_NULL		2
+#define DICTMAP_OPT_IS_STOP		4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
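
To illustrate how the operator and option codes above correspond to the new
syntax (d1 and d2 are placeholder dictionaries):

ALTER TEXT SEARCH CONFIGURATION cfg ALTER MAPPING FOR asciiword WITH
CASE
    WHEN d1 AND d2 THEN d1 UNION d2            -- DICTMAP_OP_AND, DICTMAP_OP_UNION
    WHEN d1 IS NOT STOPWORD THEN d1 MAP BY d2  -- DICTMAP_OPT_IS_STOP|DICTMAP_OPT_NOT, DICTMAP_OP_MAPBY
    ELSE d1 EXCEPT d2                          -- DICTMAP_OP_EXCEPT
END;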
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index ffeeb49..d956b56 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -380,6 +380,8 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 732e5d6..af4e961 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3369,6 +3369,33 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+typedef enum DictPipeElemType
+{
+	DICT_MAP_OPERAND,
+	DICT_MAP_OPERATOR,
+	DICT_MAP_CONST_TRUE
+} DictPipeType;
+
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	int8		kind;			/* see DictPipeElemType */
+	List	   *dictname;		/* used for DICT_MAP_OPERAND */
+	struct DictMapExprElem *left;	/* used for DICT_MAP_OPERATOR */
+	struct DictMapExprElem *right;	/* used for DICT_MAP_OPERATOR */
+	int8		oper;			/* used for DICT_MAP_OPERATOR */
+	int8		options;		/* Can be used in the future */
+} DictMapExprElem;
+
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	DictMapExprElem *condition;
+	DictMapExprElem *command;
+	List	   *commandmaps;
+	List	   *dictnames;
+} DictMapElem;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3381,6 +3408,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	List	   *dict_map;
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index f50e45e..5100aac 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -240,6 +240,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
@@ -376,6 +377,7 @@ PG_KEYWORD("statement", STATEMENT, UNRESERVED_KEYWORD)
 PG_KEYWORD("statistics", STATISTICS, UNRESERVED_KEYWORD)
 PG_KEYWORD("stdin", STDIN, UNRESERVED_KEYWORD)
 PG_KEYWORD("stdout", STDOUT, UNRESERVED_KEYWORD)
+PG_KEYWORD("stopword", STOPWORD, UNRESERVED_KEYWORD)
 PG_KEYWORD("storage", STORAGE, UNRESERVED_KEYWORD)
 PG_KEYWORD("strict", STRICT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("strip", STRIP_P, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index abff0fd..bfde460 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapRuleList **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..73b87de
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of a text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapRuleList structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapRuleList *rules);
+
+/* Extract TSMapRuleList from JSONB-formatted data */
+extern TSMapRuleList *JsonbToTSMap(Jsonb *json);
+/* Replace all occurrences of oldDict by newDict */
+extern void TSMapReplaceDictionary(TSMapRuleList *rules, Oid oldDict, Oid newDict);
+
+/* Return all dictionaries in the rule list, in definition order, as an array of Oids */
+extern Oid *TSMapGetDictionariesList(TSMapRuleList *rules);
+
+/* Return all dictionaries in the rule list, in definition order, as a ListDictionary structure */
+extern ListDictionary *TSMapGetListDictionary(TSMapRuleList *rules);
+
+/* Move rule list into specified memory context */
+extern TSMapRuleList *TSMapMoveToMemoryContext(TSMapRuleList *rules, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapFree(TSMapRuleList *rules);
+
+/* Print rule in human-readable format */
+extern void TSMapPrintRule(TSMapRule *rule, StringInfo result, int depth);
+
+/* Print rule list in human-readable format */
+extern void TSMapPrintRuleList(TSMapRuleList *rules, StringInfo result, int depth);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 94ba7fc..e933d7b 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -14,6 +14,7 @@
 #define _PG_TS_PUBLIC_H_
 
 #include "tsearch/ts_type.h"
+#include "catalog/pg_ts_config_map.h"
 
 /*
  * Parser's framework
@@ -115,6 +116,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44f..40029f3 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0744ef8..760673c 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,145 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_multi(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem INTERSECT simple END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN simple EXCEPT english_stem END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH ispell;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN ispell THEN ispell
+	ELSE english_stem
+END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN hunspell THEN english_stem MAP BY hunspell
+	ELSE english_stem
+END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,3 +719,74 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION english_multi2(
+					COPY=english_multi
+);
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus ELSE english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus IS NOT NULL OR english_stem IS NOT NULL THEN thesaurus UNION english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+          to_tsvector           
+--------------------------------
+ '1987a':2 'sn':1 'supernova':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('english_multi2', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('english_multi2', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('english_multi2', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus UNION simple
+	ELSE english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+                                         to_tsvector                                         
+---------------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..5b6fe73 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,66 +567,65 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            | dictionaries |   command    | lexemes 
+-----------+-----------------+----------------------------+--------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  |              |              | 
+ asciiword | Word, all ASCII | abc                        | english_stem | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      |              |              | 
+ asciiword | Word, all ASCII | def                        | english_stem | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     |              |              | 
+ asciiword | Word, all ASCII | ghi                        | english_stem | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     |              |              | 
+ asciiword | Word, all ASCII | jkl                        | english_stem | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> |              |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                |              |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | simple       | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | simple       | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | simple       | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                |              |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------+------------------------------
+ protocol | Protocol head | http://                    |              |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | simple       | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | simple       | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | simple       | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     |              |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | command |        lexemes         
+----------+---------------+----------------------+--------------+---------+------------------------
+ protocol | Protocol head | http://              |              |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | simple       | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | simple       | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | simple       | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | command |         lexemes          
+----------+-------------+------------------------+--------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | simple       | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | simple       | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | simple       | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
-  dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims,
-  lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
+  dictionaries, lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
 from ts_debug('english', 'a title');
- token |   alias   |  dictionaries  | dnull | ddims | lexemes | lnull | ldims 
--------+-----------+----------------+-------+-------+---------+-------+-------
- a     | asciiword | {english_stem} | f     | [1:1] | {}      | f     | 
-       | blank     | {}             | f     |       |         | t     | 
- title | asciiword | {english_stem} | f     | [1:1] | {titl}  | f     | [1:1]
+ token |   alias   | dictionaries | lexemes | lnull | ldims 
+-------+-----------+--------------+---------+-------+-------
+ a     | asciiword | english_stem | {}      | f     | 
+       | blank     |              |         | t     | 
+ title | asciiword | english_stem | {titl}  | f     | [1:1]
 (3 rows)
 
 -- to_tsquery
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990..320e220 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index a5a569e..337302b 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,68 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_multi(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem INTERSECT simple END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN simple EXCEPT english_stem END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH ispell;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN ispell THEN ispell
+	ELSE english_stem
+END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
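+-- MAP BY feeds hunspell's output lexemes into english_stem as new input tokens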
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN hunspell THEN english_stem MAP BY hunspell
+	ELSE english_stem
+END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -188,3 +250,41 @@ ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR
 SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
+CREATE TEXT SEARCH CONFIGURATION english_multi2(
+					COPY=english_multi
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus ELSE english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus IS NOT NULL OR english_stem IS NOT NULL THEN thesaurus UNION english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('english_multi2', 'one two');
+SELECT to_tsvector('english_multi2', 'one two three');
+SELECT to_tsvector('english_multi2', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus UNION simple
+	ELSE english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..8ef3d71 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
@@ -146,8 +146,7 @@ SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
 SELECT token, alias,
-  dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims,
-  lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
+  dictionaries, lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
 from ts_debug('english', 'a title');
 
 -- to_tsquery
#2Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksandr Parfenov (#1)
1 attachment(s)
Re: Flexible configuration for full-text search

Attached is an updated patch with fixes for empty XML tags in the
documentation.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v2.patchtext/x-patchDownload
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index b44aac9..ddbe4e4 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_expression</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_expression</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,16 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">dictionary_expression</replaceable></term>
+    <listitem>
+     <para>
+      A tree-structured expression over dictionaries. The dictionary expression
+      is a list of condition/command pairs that defines the way to process text.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +147,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +169,64 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionary Expressions</title>
+
+  <refsect2>
+   <title>Format</title>
+   <programlisting>
+    CASE
+      WHEN <replaceable class="parameter">condition</replaceable> THEN <replaceable class="parameter">command</replaceable>
+      [ WHEN <replaceable class="parameter">condition</replaceable> THEN <replaceable class="parameter">command</replaceable> ]
+      [ ELSE <replaceable class="parameter">command</replaceable> ]
+    END
+   </programlisting>
+   <para>
+    A condition is
+   </para>
+
+   <programlisting>
+    dictionary_name [IS [NOT] {NULL|STOPWORD}] [ {AND|OR} ... ]
+    or
+    (dictionary_name MAP BY dictionary_name) IS [NOT] {NULL|STOPWORD} [ {AND|OR} ... ]
+   </programlisting>
+
+   <para>
+    And a command is:
+   </para>
+
+   <programlisting>
+    dictionary_name [ {UNION|INTERSECT|EXCEPT|MAP BY} ... ]
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Condition</title>
+   <para>
+    A condition is used to select a command for token processing. A condition is
+    a boolean expression. A dictionary can be tested for <literal>NULL</literal> output
+    or stop-word output via the options <literal>IS [NOT] {NULL|STOPWORD}</literal>. If none
+    of the test options is mentioned (<literal>dictionary_name</literal> without additional
+    options), the dictionary is tested for both not-<literal>NULL</literal> and not-stop-word output.
+   </para>
+  </refsect2>
+
+  <refsect2>
+   <title>Command</title>
+   <para>
+    A command describes how <productname>PostgreSQL</productname> should build
+    a result set for the current token. The output of each dictionary is a set of lexemes.
+    The results of dictionaries can be combined with the operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal> and the special
+    operator <literal>MAP BY</literal>. The <literal>MAP BY</literal> operator uses the output of
+    the right subexpression as the input for the left subexpression. If the right
+    subexpression's output is <literal>NULL</literal>, the initial token is used instead. If the output
+    contains multiple lexemes, each lexeme is used as a token for the left subexpression
+    independently and the final results are combined.
+   </para>
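+
+   <para>
+    For example, assuming the built-in <literal>english_stem</literal> and
+    <literal>simple</literal> dictionaries, the following mapping indexes both
+    the stemmed and the original form of every ASCII word (the configuration
+    name is illustrative):
+   </para>
+
+   <programlisting>
+ALTER TEXT SEARCH CONFIGURATION english_multi
+    ALTER MAPPING FOR asciiword
+    WITH CASE
+      WHEN english_stem OR simple THEN english_stem UNION simple
+    END;
+   </programlisting>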
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 7b4912d..58bf43a 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries">) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type; conditions and commands are logical and set expressions
+    over dictionaries (<xref linkend="textsearch-dictionaries">), respectively.
+    The first pair whose condition evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token, based on its command.  For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is also ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2232,7 +2233,9 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
       dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      <firstterm>filtering dictionary</firstterm>). This behavior applies only
+      to the comma-separated configuration syntax
+      (see <xref linkend="SQL-ALTERTSCONFIG"> for more information)
      </para>
     </listitem>
     <listitem>
@@ -2264,38 +2267,85 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
+   A list of cases is described as condition/command pairs. Each condition is
+   evaluated in order to select the appropriate command to generate the resulting
+   set of lexemes.
+  </para>
+
+  <para>
+   A condition is a boolean expression with dictionaries used as operands, the
+   basic logic operators <literal>AND</literal>, <literal>OR</literal>, <literal>NOT</literal>, and the
+   special operator <literal>MAP BY</literal>. In addition to operators, each operand
+   may carry an <literal>IS [NOT] NULL</literal> or <literal>IS [NOT] STOPWORD</literal> option
+   that specifies how its lexemes are interpreted as a boolean value. If no option is given,
+   the operand is interpreted as <literal>dictionary IS NOT NULL AND dictionary IS NOT STOPWORD</literal>.
+
+   The special operator <literal>MAP BY</literal> uses the output of the right-hand
+   subexpression as input for the left-hand one. In a condition, the left and right
+   subexpressions can each be either another <literal>MAP BY</literal> expression or a
+   dictionary expression. The result of <literal>MAP BY</literal> should be explicitly
+   marked for boolean interpretation.
+  </para>
+
+  <para>
+   A command is a set expression with dictionaries used as operands, the basic
+   set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>,
+   and the special operator <literal>MAP BY</literal>. The behavior of the <literal>MAP BY</literal>
+   operator is similar to its behavior in conditions, but without restrictions on the
+   content of subexpressions, since all operators operate on sets.
+  </para>
+
+  <para>
+   The general rule for configuring a list of condition/command pairs
    is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer via the comma-separated variant of mapping:
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+   Another example is a configuration for both the English and German languages via
+   the <literal>CASE</literal>-based variant of mapping:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE
+            WHEN english_ispell AND german_ispell THEN
+                 english_ispell UNION german_ispell
+            WHEN english_ispell THEN
+                 english_ispell UNION german_stem
+            WHEN german_ispell THEN
+                 german_ispell UNION english_stem
+            ELSE
+                 english_stem UNION german_stem
+        END;
+</programlisting>
+
   </para>
 
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   A filtering dictionary can be placed anywhere in a comma-separated list,
+   except at the end where it'd be useless.
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"> module.
+   Otherwise, a filtering dictionary should be placed on the right-hand side of the
+   <literal>MAP BY</literal> operator. If the filtering dictionary returns <literal>NULL</literal>,
+   it passes the initial token further along the processing chain.
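+
+   For example (assuming a filtering dictionary <literal>unaccent</literal>
+   created from the <xref linkend="unaccent"> module; the configuration name
+   is illustrative), accents can be stripped before stemming:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_unaccented
+    ALTER MAPPING FOR word
+    WITH CASE
+      WHEN unaccent THEN english_stem MAP BY unaccent
+      ELSE english_stem
+    END;
+</programlisting>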
   </para>
 
   <sect2 id="textsearch-stopwords">
@@ -2462,9 +2512,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token | dictionaries |   command    | lexemes 
+-----------+-----------------+-------+--------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | english_stem | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2476,9 +2526,9 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |      dictionaries       |  command   | lexemes 
+-----------+-----------------+-------+-------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | my_synonym,english_stem | my_synonym | {paris}
 </screen>
    </para>
 
@@ -3107,6 +3157,20 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE
+        WHEN pg_dict IS NOT NULL THEN pg_dict
+        WHEN english_ispell THEN english_ispell
+        ELSE english_stem
+      END;
+</programlisting>
+
+    Or use alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3177,13 +3241,13 @@ SHOW default_text_search_config;
   </indexterm>
 
 <synopsis>
-ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">alias</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
-         OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
+ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">alias</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">dictionaries</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
 
@@ -3220,20 +3284,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionaries</replaceable> <type>regdictionary[]</type> &mdash; the
-       dictionaries selected by the configuration for this token type
+       <replaceable>dictionaries</replaceable> <type>text</type> &mdash; the
+       dictionaries defined by the configuration for this token type
       </para>
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       how the output is generated
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3246,32 +3310,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token | dictionaries |   command    | lexemes 
+-----------+-----------------+-------+--------------+--------------+---------
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | fat   | english_stem | english_stem | {fat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | cat   | english_stem | english_stem | {cat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | sat   | english_stem | english_stem | {sat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | on    | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | mat   | english_stem | english_stem | {mat}
+ blank     | Space symbols   |       |              |              | 
+ blank     | Space symbols   | -     |              |              | 
+ asciiword | Word, all ASCII | it    | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | ate   | english_stem | english_stem | {ate}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | fat   | english_stem | english_stem | {fat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | rats  | english_stem | english_stem | {rat}
 </screen>
   </para>
 
@@ -3297,13 +3361,13 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |        dictionaries         |    command     |   lexemes   
+-----------+-----------------+-------------+-----------------------------+----------------+-------------
+ asciiword | Word, all ASCII | The         | english_ispell,english_stem | english_ispell | {}
+ blank     | Space symbols   |             |                             |                | 
+ asciiword | Word, all ASCII | Brightest   | english_ispell,english_stem | english_ispell | {bright}
+ blank     | Space symbols   |             |                             |                | 
+ asciiword | Word, all ASCII | supernovaes | english_ispell,english_stem | english_stem   | {supernova}
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index dc40cde..74cab6b 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -944,55 +944,13 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT dictionaries text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index adc7cd6..a0f1650 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -52,6 +55,7 @@ static void MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 						 HeapTuple tup, Relation relMap);
 static void DropConfigurationMapping(AlterTSConfigurationStmt *stmt,
 						 HeapTuple tup, Relation relMap);
+static TSMapRuleList *ParseTSMapList(List *dictMapList);
 
 
 /* --------------------- TS Parser commands ------------------------ */
@@ -935,11 +939,21 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapRuleList *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionariesList(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+			pfree(dictionaryOids);
+			pfree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,105 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
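+/*
+ * Convert a parsed dictionary map expression (DictMapExprElem tree) into the
+ * internal TSMapExpression form, resolving dictionary names to OIDs.
+ */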
+static TSMapExpression *
+ParseTSMapExpression(DictMapExprElem *head)
+{
+	TSMapExpression *result;
+
+	if (head == NULL)
+		return NULL;
+
+	result = palloc0(sizeof(TSMapExpression));
+
+	if (head->kind == DICT_MAP_OPERATOR)
+	{
+		result->left = ParseTSMapExpression(head->left);
+		result->right = ParseTSMapExpression(head->right);
+		result->operator = head->oper;
+		result->options = head->options;
+	}
+	else if (head->kind == DICT_MAP_CONST_TRUE)
+	{
+		result->left = result->right = NULL;
+		result->is_true = true;
+		result->options = result->operator = 0;
+	}
+	else						/* head->kind == DICT_MAP_OPERAND */
+	{
+		result->dictionary = get_ts_dict_oid(head->dictname, false);
+		result->options = head->options;
+	}
+
+	return result;
+}
+
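+/*
+ * Build a TSMapRule from a single CASE element: parse its condition and
+ * either a nested CASE (rule list) or a command expression.
+ */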
+static TSMapRule
+ParseTSMapRule(DictMapElem *elem)
+{
+	TSMapRule	result;
+
+	memset(&result, 0, sizeof(result));
+
+	result.condition.expression = ParseTSMapExpression(elem->condition);
+	if (elem->commandmaps)
+	{
+		result.command.ruleList = ParseTSMapList(elem->commandmaps);
+		result.command.is_expression = false;
+		result.command.expression = NULL;
+	}
+	else
+	{
+		result.command.ruleList = NULL;
+		result.command.is_expression = true;
+		result.command.expression = ParseTSMapExpression(elem->command);
+	}
+
+	return result;
+}
+
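+/*
+ * Convert a list of DictMapElem nodes into a TSMapRuleList.  A bare
+ * comma-separated dictionary list becomes one rule per dictionary; otherwise
+ * each element is parsed as a WHEN/ELSE rule.
+ */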
+static TSMapRuleList *
+ParseTSMapList(List *dictMapList)
+{
+	int			i;
+	TSMapRuleList *result;
+	ListCell   *c;
+
+	if (list_length(dictMapList) == 1 && ((DictMapElem *) lfirst(dictMapList->head))->dictnames)
+	{
+		DictMapElem *elem = (DictMapElem *) lfirst(dictMapList->head);
+
+		result = palloc0(sizeof(TSMapRuleList));
+		result->count = list_length(elem->dictnames);
+		result->data = palloc0(sizeof(TSMapRule) * result->count);
+
+		i = 0;
+		foreach(c, elem->dictnames)
+		{
+			List	   *names = (List *) lfirst(c);
+
+			result->data[i].dictionary = get_ts_dict_oid(names, false);
+			i++;
+		}
+	}
+	else
+	{
+		result = palloc0(sizeof(TSMapRuleList));
+		result->count = list_length(dictMapList);
+		result->data = palloc0(sizeof(TSMapRule) * result->count);
+
+		i = 0;
+		foreach(c, dictMapList)
+		{
+			DictMapElem *elem = (DictMapElem *) lfirst(c);
+
+			result->data[i] = ParseTSMapRule(elem);
+			i++;
+		}
+	}
+
+	return result;
+}
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1287,8 +1399,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapRuleList *mapRules = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1327,17 +1440,23 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
+	if (stmt->dict_map)
+		mapRules = ParseTSMapList(stmt->dict_map);
+
 	if (stmt->replace)
 	{
 		/*
@@ -1357,6 +1476,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1380,25 +1503,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			mapRules = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(mapRules, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(mapRules));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(mapRules);
 		}
 
 		systable_endscan(scan);
@@ -1408,24 +1527,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(mapRules));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index c1a83ca..476e8da 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4371,6 +4371,32 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(dictname);
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(kind);
+	COPY_SCALAR_FIELD(oper);
+	COPY_SCALAR_FIELD(options);
+
+	return newnode;
+}
+
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(commandmaps);
+	COPY_NODE_FIELD(dictnames);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5373,6 +5399,12 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 7a70001..4434566 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2177,6 +2177,28 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(dictname);
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(kind);
+	COMPARE_SCALAR_FIELD(oper);
+	COMPARE_SCALAR_FIELD(options);
+
+	return true;
+}
+
+static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(commandmaps);
+	COMPARE_NODE_FIELD(dictnames);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3517,6 +3539,12 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 4c83a63..6a14890 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapExprElem		*dmapexpr;
+	DictMapElem			*dmap;
 }
 
 %type <node>	stmt schema_stmt
@@ -396,8 +399,9 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				relation_expr_list dostmt_opt_list
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
-				publication_name_list
 				vacuum_relation_list opt_vacuum_relation_list
+				publication_name_list dictionary_map_list dictionary_map
+				dictionary_map_case
 
 %type <list>	group_by_list
 %type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
@@ -581,6 +585,15 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>		partbound_datum PartitionRangeDatum
 %type <list>		partbound_datum_list range_datum_list
 
+%type <ival>		dictionary_map_clause_expr_dict_not dictionary_map_clause_expr_dict_flag
+%type <dmapexpr>	dictionary_map_clause dictionary_map_clause_expr_not
+					dictionary_map_command dictionary_map_command_expr_paren
+					dictionary_map_dict dictionary_map_clause_expr_or
+					dictionary_map_clause_expr_and dictionary_map_clause_expr_mapby_ext
+					dictionary_map_clause_expr_mapby
+					dictionary_map_clause_expr_paren dictionary_map_clause_expr_dict
+%type <dmap>		dictionary_map_else dictionary_map_element
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -648,7 +661,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -671,7 +685,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	SAVEPOINT SCHEMA SCHEMAS SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
 	SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE SHOW
 	SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME SQL_P STABLE STANDALONE_P
-	START STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P
+	START STATEMENT STATISTICS STDIN STDOUT STOPWORD STORAGE STRICT_P STRIP_P
 	SUBSCRIPTION SUBSTRING SYMMETRIC SYSID SYSTEM_P
 
 	TABLE TABLES TABLESAMPLE TABLESPACE TEMP TEMPLATE TEMPORARY TEXT_P THEN
@@ -10005,24 +10019,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_map
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_map
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10074,6 +10090,272 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
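+/*
+ * Dictionary mapping for ALTER TEXT SEARCH CONFIGURATION ... WITH: either a
+ * CASE ... END expression or a plain comma-separated list of dictionary names.
+ */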
+dictionary_map:
+			dictionary_map_case { $$ = $1; }
+			| any_name_list
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->condition = NULL;
+				n->command = NULL;
+				n->commandmaps = NULL;
+				n->dictnames = $1;
+				$$ = list_make1(n);
+			}
+		;
+
+dictionary_map_case:
+			CASE dictionary_map_list END_P
+			{
+				$$ = $2;
+			}
+			| CASE dictionary_map_list dictionary_map_else END_P
+			{
+				$$ = lappend($2, $3);
+			}
+		;
+
+dictionary_map_list:
+			dictionary_map_element							{ $$ = list_make1($1); }
+			| dictionary_map_list dictionary_map_element	{ $$ = lappend($1, $2); }
+		;
+
+dictionary_map_else:
+			ELSE dictionary_map_command
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->command = $2;
+				n->commandmaps = NULL;
+				n->dictnames = NULL;
+
+				n->condition = makeNode(DictMapExprElem);
+				n->condition->kind = DICT_MAP_CONST_TRUE;
+				n->condition->oper = 0;
+				n->condition->options = 0;
+				n->condition->left = NULL;
+				n->condition->right = NULL;
+
+				$$ = n;
+			}
+			| ELSE dictionary_map_case
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->command = NULL;
+				n->commandmaps = $2;
+				n->dictnames = NULL;
+
+				n->condition = makeNode(DictMapExprElem);
+				n->condition->kind = DICT_MAP_CONST_TRUE;
+				n->condition->oper = 0;
+				n->condition->options = 0;
+				n->condition->left = NULL;
+				n->condition->right = NULL;
+
+				$$ = n;
+			}
+		;
+
+dictionary_map_element:
+			WHEN dictionary_map_clause THEN dictionary_map_command
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->condition = $2;
+				n->command = $4;
+				n->commandmaps = NULL;
+				n->dictnames = NULL;
+				$$ = n;
+			}
+			| WHEN dictionary_map_clause THEN dictionary_map_case
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->condition = $2;
+				n->command = NULL;
+				n->commandmaps = $4;
+				n->dictnames = NULL;
+				$$ = n;
+			}
+		;
+
+dictionary_map_clause:
+			dictionary_map_clause_expr_or { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_or:
+			dictionary_map_clause_expr_and OR dictionary_map_clause_expr_or
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_OR;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_and { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_and:
+			dictionary_map_clause_expr_not AND dictionary_map_clause_expr_and
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_AND;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_not { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_mapby_ext:
+			dictionary_map_clause_expr_dict MAP BY dictionary_map_clause_expr_mapby_ext
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_MAPBY;
+				n->options = 0;
+				n->left = $1;
+				n->right = $4;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_dict { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_mapby:
+			dictionary_map_clause_expr_dict MAP BY dictionary_map_clause_expr_mapby_ext
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_MAPBY;
+				n->options = 0;
+				n->left = $1;
+				n->right = $4;
+				$$ = n;
+			}
+		;
+
+dictionary_map_clause_expr_not:
+			NOT dictionary_map_clause_expr_not
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_NOT;
+				n->options = 0;
+				n->left = NULL;
+				n->right = $2;
+				$$ = n;
+			}
+			| dictionary_map_clause_expr_paren { $$ = $1; }
+		;
+
+dictionary_map_clause_expr_paren:
+			'(' dictionary_map_clause_expr_or ')'	{ $$ = $2; }
+			| '(' dictionary_map_clause_expr_mapby ')' IS dictionary_map_clause_expr_dict_not dictionary_map_clause_expr_dict_flag
+			{
+				$$ = $2;
+				$$->options = $5 | $6;
+			}
+			| '(' dictionary_map_clause_expr_mapby ')'
+			{
+				$$ = $2;
+				$$->options = DICTMAP_OPT_NOT | DICTMAP_OPT_IS_NULL | DICTMAP_OPT_IS_STOP;
+			}
+			| dictionary_map_clause_expr_dict		{ $$ = $1; }
+		;
+
+dictionary_map_clause_expr_dict:
+			any_name
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERAND;
+				n->dictname = $1;
+				n->oper = 0;
+				n->options = DICTMAP_OPT_NOT | DICTMAP_OPT_IS_NULL | DICTMAP_OPT_IS_STOP;
+				n->left = n->right = NULL;
+				$$ = n;
+			}
+			| any_name IS dictionary_map_clause_expr_dict_not dictionary_map_clause_expr_dict_flag
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERAND;
+				n->dictname = $1;
+				n->oper = 0;
+				n->options = $3 | $4;
+				n->left = n->right = NULL;
+				$$ = n;
+			}
+		;
+
+dictionary_map_clause_expr_dict_not:
+			NOT				{ $$ = DICTMAP_OPT_NOT; }
+			| /* EMPTY */	{ $$ = 0; }
+		;
+
+dictionary_map_clause_expr_dict_flag:
+			NULL_P			{ $$ = DICTMAP_OPT_IS_NULL; }
+			| STOPWORD		{ $$ = DICTMAP_OPT_IS_STOP; }
+		;
+
+dictionary_map_command:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_command_expr_paren UNION dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_UNION;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_command_expr_paren EXCEPT dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_EXCEPT;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_command_expr_paren INTERSECT dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_INTERSECT;
+				n->options = 0;
+				n->left = $1;
+				n->right = $3;
+				$$ = n;
+			}
+			| dictionary_map_command_expr_paren MAP BY dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERATOR;
+				n->oper = DICTMAP_OP_MAPBY;
+				n->options = 0;
+				n->left = $1;
+				n->right = $4;
+				$$ = n;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_command ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				n->kind = DICT_MAP_OPERAND;
+				n->dictname = $1;
+				n->options = 0;
+				n->left = n->right = NULL;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -14728,6 +15010,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -14831,6 +15114,7 @@ unreserved_keyword:
 			| STATISTICS
 			| STDIN
 			| STDOUT
+			| STOPWORD
 			| STORAGE
 			| STRICT_P
 			| STRIP_P
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 34fe4c5..24e47f2 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..a7d9e0c
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,976 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Used during the parsing of TSMapRuleList from JSONB into internal
+ * datastructures.
+ */
+typedef enum TSMapRuleParseState
+{
+	TSMRPS_BEGINING,
+	TSMRPS_IN_CASES_ARRAY,
+	TSMRPS_IN_CASE,
+	TSMRPS_IN_CONDITION,
+	TSMRPS_IN_COMMAND,
+	TSMRPS_IN_EXPRESSION
+} TSMapRuleParseState;
+
+typedef enum TSMapRuleParseNodeType
+{
+	TSMRPT_UNKNOWN,
+	TSMRPT_NUMERIC,
+	TSMRPT_EXPRESSION,
+	TSMRPT_RULE_LIST,
+	TSMRPT_RULE,
+	TSMRPT_COMMAND,
+	TSMRPT_CONDITION,
+	TSMRPT_BOOL
+} TSMapRuleParseNodeType;
+
+typedef struct TSMapParseNode
+{
+	TSMapRuleParseNodeType type;
+	union
+	{
+		int			num_val;
+		bool		bool_val;
+		TSMapRule  *rule_val;
+		TSMapCommand *command_val;
+		TSMapRuleList *rule_list_val;
+		TSMapCondition *condition_val;
+		TSMapExpression *expression_val;
+	};
+} TSMapParseNode;
+
+static JsonbValue *TSMapToJsonbValue(TSMapRuleList *rules, JsonbParseState *jsonb_state);
+static TSMapParseNode *JsonbToTSMapParse(JsonbContainer *root, TSMapRuleParseState *parse_state);
+
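+/*
+ * Append the name of the dictionary with OID dictId to result, looked up in
+ * pg_ts_dict.
+ */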
+static void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
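+/*
+ * Recursively print a TSMapExpression as SQL-like text, parenthesizing
+ * subexpressions with lower-precedence operators and appending any
+ * non-default IS [NOT] NULL/STOPWORD options.
+ */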
+static void
+TSMapExpressionPrint(TSMapExpression *expression, StringInfo result)
+{
+	if (expression->dictionary == InvalidOid && expression->options != 0)
+		appendStringInfoChar(result, '(');
+
+	if (expression->left)
+	{
+		if (expression->left->operator != 0 && expression->left->operator < expression->operator)
+			appendStringInfoChar(result, '(');
+
+		TSMapExpressionPrint(expression->left, result);
+
+		if (expression->left->operator != 0 && expression->left->operator < expression->operator)
+			appendStringInfoChar(result, ')');
+	}
+
+	switch (expression->operator)
+	{
+		case DICTMAP_OP_OR:
+			appendStringInfoString(result, " OR ");
+			break;
+		case DICTMAP_OP_AND:
+			appendStringInfoString(result, " AND ");
+			break;
+		case DICTMAP_OP_NOT:
+			appendStringInfoString(result, " NOT ");
+			break;
+		case DICTMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case DICTMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case DICTMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case DICTMAP_OP_MAPBY:
+			appendStringInfoString(result, " MAP BY ");
+			break;
+	}
+
+	if (expression->right)
+	{
+		if (expression->right->operator != 0 && expression->right->operator < expression->operator)
+			appendStringInfoChar(result, '(');
+
+		TSMapExpressionPrint(expression->right, result);
+
+		if (expression->right->operator != 0 && expression->right->operator < expression->operator)
+			appendStringInfoChar(result, ')');
+	}
+
+	if (expression->dictionary == InvalidOid && expression->options != 0)
+		appendStringInfoChar(result, ')');
+
+	if (expression->dictionary != InvalidOid || expression->options != 0)
+	{
+		if (expression->dictionary != InvalidOid)
+			TSMapPrintDictName(expression->dictionary, result);
+		if (expression->options != (DICTMAP_OPT_NOT | DICTMAP_OPT_IS_NULL | DICTMAP_OPT_IS_STOP))
+		{
+			if (expression->options != 0)
+				appendStringInfoString(result, " IS ");
+			if (expression->options & DICTMAP_OPT_NOT)
+				appendStringInfoString(result, "NOT ");
+			if (expression->options & DICTMAP_OPT_IS_NULL)
+				appendStringInfoString(result, "NULL ");
+			if (expression->options & DICTMAP_OPT_IS_STOP)
+				appendStringInfoString(result, "STOPWORD ");
+		}
+	}
+}
+
+void
+TSMapPrintRule(TSMapRule *rule, StringInfo result, int depth)
+{
+	int			i;
+
+	if (rule->dictionary != InvalidOid)
+	{
+		TSMapPrintDictName(rule->dictionary, result);
+	}
+	else if (rule->condition.expression->is_true)
+	{
+		for (i = 0; i < depth; i++)
+			appendStringInfoChar(result, '\t');
+		appendStringInfoString(result, "ELSE ");
+	}
+	else
+	{
+		for (i = 0; i < depth; i++)
+			appendStringInfoChar(result, '\t');
+		appendStringInfoString(result, "WHEN ");
+		TSMapExpressionPrint(rule->condition.expression, result);
+		appendStringInfoString(result, " THEN\n");
+		for (i = 0; i < depth + 1; i++)
+			appendStringInfoString(result, "\t");
+	}
+
+	if (rule->command.is_expression)
+	{
+		TSMapExpressionPrint(rule->command.expression, result);
+	}
+	else if (rule->dictionary == InvalidOid)
+	{
+		TSMapPrintRuleList(rule->command.ruleList, result, depth + 1);
+	}
+}
+
+void
+TSMapPrintRuleList(TSMapRuleList *rules, StringInfo result, int depth)
+{
+	int			i;
+
+	for (i = 0; i < rules->count; i++)
+	{
+		if (rules->data[i].dictionary != InvalidOid)	/* Comma-separated
+														 * configuration syntax */
+		{
+			if (i > 0)
+				appendStringInfoString(result, ", ");
+			TSMapPrintDictName(rules->data[i].dictionary, result);
+		}
+		else
+		{
+			if (i == 0)
+			{
+				int			j;
+
+				for (j = 0; j < depth; j++)
+					appendStringInfoChar(result, '\t');
+				appendStringInfoString(result, "CASE\n");
+			}
+			else
+				appendStringInfoChar(result, '\n');
+			TSMapPrintRule(&rules->data[i], result, depth + 1);
+		}
+	}
+
+	if (rules->data[0].dictionary == InvalidOid)
+	{
+		appendStringInfoChar(result, '\n');
+		for (i = 0; i < depth; i++)
+			appendStringInfoChar(result, '\t');
+		appendStringInfoString(result, "END");
+	}
+}
+
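+/*
+ * SQL-callable helper: render the mapping of configuration cfgOid for
+ * the given token type as human-readable CASE/WHEN text.
+ */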
+Datum
+dictionary_map_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype]->count > 0)
+	{
+		TSMapRuleList *rules = cacheEntry->map[tokentype];
+
+		TSMapPrintRuleList(rules, rawResult, 0);
+	}
+
+	if (rawResult)
+	{
+		result = cstring_to_text(rawResult->data);
+		pfree(rawResult);
+	}
+
+	PG_RETURN_TEXT_P(result);
+}
+
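+/* Wrap an integer into a numeric JsonbValue. */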
+static JsonbValue *
+TSIntToJsonbValue(int int_value)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(int_value, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(
+															 numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+static JsonbValue *
+TSExpressionToJsonb(TSMapExpression *expression, JsonbParseState *jsonb_state)
+{
+	if (expression == NULL)
+		return NULL;
+	if (expression->dictionary != InvalidOid)
+	{
+		JsonbValue	key;
+		JsonbValue *value = NULL;
+
+		pushJsonbValue(&jsonb_state, WJB_BEGIN_OBJECT, NULL);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("options");
+		key.val.string.val = "options";
+		value = TSIntToJsonbValue(expression->options);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("dictionary");
+		key.val.string.val = "dictionary";
+		value = TSIntToJsonbValue(expression->dictionary);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		return pushJsonbValue(&jsonb_state, WJB_END_OBJECT, NULL);
+	}
+	else if (expression->is_true)
+	{
+		JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+		value->type = jbvBool;
+		value->val.boolean = true;
+		return value;
+	}
+	else
+	{
+		JsonbValue	key;
+		JsonbValue *value = NULL;
+
+		pushJsonbValue(&jsonb_state, WJB_BEGIN_OBJECT, NULL);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("operator");
+		key.val.string.val = "operator";
+		value = TSIntToJsonbValue(expression->operator);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("options");
+		key.val.string.val = "options";
+		value = TSIntToJsonbValue(expression->options);
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("left");
+		key.val.string.val = "left";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		value = TSExpressionToJsonb(expression->left, jsonb_state);
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("right");
+		key.val.string.val = "right";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		value = TSExpressionToJsonb(expression->right, jsonb_state);
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		return pushJsonbValue(&jsonb_state, WJB_END_OBJECT, NULL);
+	}
+}
+
+static JsonbValue *
+TSRuleToJsonbValue(TSMapRule *rule, JsonbParseState *jsonb_state)
+{
+	if (rule->dictionary != InvalidOid)
+	{
+		return TSIntToJsonbValue(rule->dictionary);
+	}
+	else
+	{
+		JsonbValue	key;
+		JsonbValue *value = NULL;
+
+		pushJsonbValue(&jsonb_state, WJB_BEGIN_OBJECT, NULL);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("condition");
+		key.val.string.val = "condition";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		value = TSExpressionToJsonb(rule->condition.expression, jsonb_state);
+
+		if (IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		key.type = jbvString;
+		key.val.string.len = strlen("command");
+		key.val.string.val = "command";
+
+		pushJsonbValue(&jsonb_state, WJB_KEY, &key);
+		if (rule->command.is_expression)
+			value = TSExpressionToJsonb(rule->command.expression, jsonb_state);
+		else
+			value = TSMapToJsonbValue(rule->command.ruleList, jsonb_state);
+
+		if (IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_VALUE, value);
+
+		return pushJsonbValue(&jsonb_state, WJB_END_OBJECT, NULL);
+	}
+}
+
+static JsonbValue *
+TSMapToJsonbValue(TSMapRuleList *rules, JsonbParseState *jsonb_state)
+{
+	JsonbValue *out;
+	int			i;
+
+	pushJsonbValue(&jsonb_state, WJB_BEGIN_ARRAY, NULL);
+	for (i = 0; i < rules->count; i++)
+	{
+		JsonbValue *value = TSRuleToJsonbValue(&rules->data[i], jsonb_state);
+
+		if (IsAJsonbScalar(value))
+			pushJsonbValue(&jsonb_state, WJB_ELEM, value);
+	}
+	out = pushJsonbValue(&jsonb_state, WJB_END_ARRAY, NULL);
+	return out;
+}
+
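+/*
+ * Serialize a rule list into JSONB for storage in the catalog.  As an
+ * illustration (the OID is hypothetical), a rule such as
+ * "WHEN english_hunspell THEN english_hunspell" becomes roughly
+ *   [{"condition": {"dictionary": 16554, "options": ...},
+ *     "command":   {"dictionary": 16554, "options": ...}}]
+ * where "options" encodes the IS [NOT] NULL/STOPWORD flags.
+ */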
+Jsonb *
+TSMapToJsonb(TSMapRuleList *rules)
+{
+	JsonbParseState *jsonb_state = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapToJsonbValue(rules, jsonb_state);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+static inline TSMapExpression *
+JsonbToTSMapGetExpression(TSMapParseNode *node)
+{
+	TSMapExpression *result;
+
+	if (node->type == TSMRPT_NUMERIC)
+	{
+		result = palloc0(sizeof(TSMapExpression));
+		result->dictionary = node->num_val;
+	}
+	else if (node->type == TSMRPT_BOOL)
+	{
+		result = palloc0(sizeof(TSMapExpression));
+		result->is_true = node->bool_val;
+	}
+	else
+		result = node->expression_val;
+
+	pfree(node);
+
+	return result;
+}
+
+static TSMapParseNode *
+JsonbToTSMapParseObject(JsonbValue *value, TSMapRuleParseState *parse_state)
+{
+	TSMapParseNode *result = palloc0(sizeof(TSMapParseNode));
+	char	   *str;
+
+	switch (value->type)
+	{
+		case jbvNumeric:
+			result->type = TSMRPT_NUMERIC;
+			str = DatumGetCString(
+								  DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+			result->num_val = pg_atoi(str, sizeof(result->num_val), 0);
+			break;
+		case jbvArray:
+			Assert(*parse_state == TSMRPS_IN_COMMAND);
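+			/* FALLTHROUGH */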
+		case jbvBinary:
+			result = JsonbToTSMapParse(value->val.binary.data, parse_state);
+			break;
+		case jbvBool:
+			result->type = TSMRPT_BOOL;
+			result->bool_val = value->val.boolean;
+			break;
+		case jbvObject:
+		case jbvNull:
+		case jbvString:
+			break;
+	}
+	return result;
+}
+
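+/*
+ * Walk a JSONB container with the iterator API and rebuild the
+ * TSMapRuleList and expression trees encoded in it.
+ */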
+static TSMapParseNode *
+JsonbToTSMapParse(JsonbContainer *root, TSMapRuleParseState *parse_state)
+{
+	JsonbIteratorToken r;
+	JsonbValue	val;
+	JsonbIterator *it;
+	TSMapParseNode *result;
+	TSMapParseNode *nested_result;
+	char	   *key;
+	TSMapRuleList *rule_list = NULL;
+
+	it = JsonbIteratorInit(root);
+	result = palloc0(sizeof(TSMapParseNode));
+	result->type = TSMRPT_UNKNOWN;
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+	{
+		switch (r)
+		{
+			case WJB_BEGIN_ARRAY:
+				if (*parse_state == TSMRPS_BEGINNING || *parse_state == TSMRPS_IN_EXPRESSION)
+				{
+					*parse_state = TSMRPS_IN_CASES_ARRAY;
+					rule_list = palloc0(sizeof(TSMapRuleList));
+				}
+				break;
+			case WJB_KEY:
+				key = palloc0(sizeof(char) * (val.val.string.len + 1));
+				memcpy(key, val.val.string.val, sizeof(char) * val.val.string.len);
+
+				r = JsonbIteratorNext(&it, &val, true);
+				if (*parse_state == TSMRPS_IN_CASE)
+				{
+					if (strcmp(key, "command") == 0)
+						*parse_state = TSMRPS_IN_EXPRESSION;
+					else if (strcmp(key, "condition") == 0)
+						*parse_state = TSMRPS_IN_EXPRESSION;
+				}
+
+				nested_result = JsonbToTSMapParseObject(&val, parse_state);
+
+				if (result->type == TSMRPT_RULE)
+				{
+					if (strcmp(key, "command") == 0)
+					{
+						result->rule_val->command.is_expression = nested_result->type == TSMRPT_EXPRESSION ||
+							nested_result->type == TSMRPT_NUMERIC;
+
+						if (result->rule_val->command.is_expression)
+							result->rule_val->command.expression = JsonbToTSMapGetExpression(nested_result);
+						else
+							result->rule_val->command.ruleList = nested_result->rule_list_val;
+					}
+					else if (strcmp(key, "condition") == 0)
+					{
+						result->rule_val->condition.expression = JsonbToTSMapGetExpression(nested_result);
+					}
+					*parse_state = TSMRPS_IN_CASE;
+				}
+				else if (result->type == TSMRPT_COMMAND)
+				{
+					result->command_val->is_expression = nested_result->type == TSMRPT_EXPRESSION;
+					if (result->command_val->is_expression)
+						result->command_val->expression = JsonbToTSMapGetExpression(nested_result);
+					else
+						result->command_val->ruleList = nested_result->rule_list_val;
+					*parse_state = TSMRPS_IN_COMMAND;
+				}
+				else if (result->type == TSMRPT_CONDITION)
+				{
+					result->condition_val->expression = JsonbToTSMapGetExpression(nested_result);
+					*parse_state = TSMRPS_IN_COMMAND;
+				}
+				else if (result->type == TSMRPT_EXPRESSION)
+				{
+					if (strcmp(key, "left") == 0)
+						result->expression_val->left = JsonbToTSMapGetExpression(nested_result);
+					else if (strcmp(key, "right") == 0)
+						result->expression_val->right = JsonbToTSMapGetExpression(nested_result);
+					else if (strcmp(key, "operator") == 0)
+						result->expression_val->operator = nested_result->num_val;
+					else if (strcmp(key, "options") == 0)
+						result->expression_val->options = nested_result->num_val;
+					else if (strcmp(key, "dictionary") == 0)
+						result->expression_val->dictionary = nested_result->num_val;
+				}
+
+				break;
+			case WJB_BEGIN_OBJECT:
+				if (*parse_state == TSMRPS_IN_CASES_ARRAY)
+				{
+					*parse_state = TSMRPS_IN_CASE;
+					result->type = TSMRPT_RULE;
+					result->rule_val = palloc0(sizeof(TSMapRule));
+				}
+				else if (*parse_state == TSMRPS_IN_COMMAND)
+				{
+					result->type = TSMRPT_COMMAND;
+					result->command_val = palloc0(sizeof(TSMapCommand));
+				}
+				else if (*parse_state == TSMRPS_IN_CONDITION)
+				{
+					result->type = TSMRPT_CONDITION;
+					result->condition_val = palloc0(sizeof(TSMapCondition));
+				}
+				else if (*parse_state == TSMRPS_IN_EXPRESSION)
+				{
+					result->type = TSMRPT_EXPRESSION;
+					result->expression_val = palloc0(sizeof(TSMapExpression));
+				}
+				break;
+			case WJB_END_OBJECT:
+				if (*parse_state == TSMRPS_IN_CASE)
+					*parse_state = TSMRPS_IN_CASES_ARRAY;
+				else if (*parse_state == TSMRPS_IN_CONDITION || *parse_state == TSMRPS_IN_COMMAND)
+					*parse_state = TSMRPS_IN_CASE;
+				if (rule_list && result->type == TSMRPT_RULE)
+				{
+					rule_list->count++;
+					if (rule_list->data)
+						rule_list->data = repalloc(rule_list->data, sizeof(TSMapRule) * rule_list->count);
+					else
+						rule_list->data = palloc0(sizeof(TSMapRule) * rule_list->count);
+					memcpy(rule_list->data + rule_list->count - 1, result->rule_val, sizeof(TSMapRule));
+				}
+				else
+					return result;
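+				/* FALLTHROUGH */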
+			case WJB_END_ARRAY:
+				break;
+			default:
+				nested_result = JsonbToTSMapParseObject(&val, parse_state);
+				if (nested_result->type == TSMRPT_NUMERIC)
+				{
+					if (*parse_state == TSMRPS_IN_CASES_ARRAY)
+					{
+						/*
+						 * Add dictionary Oid into array (comma-separated
+						 * configuration)
+						 */
+						rule_list->count++;
+						if (rule_list->data)
+							rule_list->data = repalloc(rule_list->data, sizeof(TSMapRule) * rule_list->count);
+						else
+							rule_list->data = palloc0(sizeof(TSMapRule) * rule_list->count);
+						memset(rule_list->data + rule_list->count - 1, 0, sizeof(TSMapRule));
+						rule_list->data[rule_list->count - 1].dictionary = nested_result->num_val;
+					}
+					else if (result->type == TSMRPT_UNKNOWN && *parse_state == TSMRPS_IN_EXPRESSION)
+					{
+						result->type = TSMRPT_EXPRESSION;
+						result->expression_val = palloc0(sizeof(TSMapExpression));
+					}
+					if (result->type == TSMRPT_EXPRESSION)
+						result->expression_val->dictionary = nested_result->num_val;
+				}
+				else if (nested_result->type == TSMRPT_RULE && rule_list)
+				{
+					rule_list->count++;
+					if (rule_list->data)
+						rule_list->data = repalloc(rule_list->data, sizeof(TSMapRule) * rule_list->count);
+					else
+						rule_list->data = palloc0(sizeof(TSMapRule) * rule_list->count);
+					memcpy(rule_list->data + rule_list->count - 1, nested_result->rule_val, sizeof(TSMapRule));
+				}
+				break;
+		}
+	}
+	result->type = TSMRPT_RULE_LIST;
+	result->rule_list_val = rule_list;
+	return result;
+}
+
+TSMapRuleList *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+	TSMapRuleList *result;
+	TSMapRuleParseState parse_state = TSMRPS_BEGINNING;
+	TSMapParseNode *parsing_result;
+
+	parsing_result = JsonbToTSMapParse(root, &parse_state);
+
+	Assert(parsing_result->type == TSMRPT_RULE_LIST);
+	result = parsing_result->rule_list_val;
+	pfree(parsing_result);
+
+	return result;
+}
+
+static void
+TSMapReplaceDictionaryParseExpression(TSMapExpression *expr, Oid oldDict, Oid newDict)
+{
+	if (expr->left)
+		TSMapReplaceDictionaryParseExpression(expr->left, oldDict, newDict);
+	if (expr->right)
+		TSMapReplaceDictionaryParseExpression(expr->right, oldDict, newDict);
+
+	if (expr->dictionary == oldDict)
+		expr->dictionary = newDict;
+}
+
+static void
+TSMapReplaceDictionaryParseMap(TSMapRule *rule, Oid oldDict, Oid newDict)
+{
+	if (rule->dictionary != InvalidOid)
+	{
+		if (rule->dictionary == oldDict)
+			rule->dictionary = newDict;
+	}
+	else
+	{
+		TSMapReplaceDictionaryParseExpression(rule->condition.expression, oldDict, newDict);
+
+		if (rule->command.is_expression)
+			TSMapReplaceDictionaryParseExpression(rule->command.expression, oldDict, newDict);
+		else
+			TSMapReplaceDictionary(rule->command.ruleList, oldDict, newDict);
+	}
+}
+
+void
+TSMapReplaceDictionary(TSMapRuleList *rules, Oid oldDict, Oid newDict)
+{
+	int			i;
+
+	for (i = 0; i < rules->count; i++)
+		TSMapReplaceDictionaryParseMap(&rules->data[i], oldDict, newDict);
+}
+
+static Oid *
+TSMapGetDictionariesParseExpression(TSMapExpression *expr)
+{
+	Oid		   *left_res;
+	Oid		   *right_res;
+	Oid		   *result;
+
+	left_res = right_res = NULL;
+
+	if (expr->left && expr->right)
+	{
+		Oid		   *ptr;
+		int			count_l;
+		int			count_r;
+
+		left_res = TSMapGetDictionariesParseExpression(expr->left);
+		right_res = TSMapGetDictionariesParseExpression(expr->right);
+
+		for (ptr = left_res, count_l = 0; *ptr != InvalidOid; ptr++)
+			count_l++;
+		for (ptr = right_res, count_r = 0; *ptr != InvalidOid; ptr++)
+			count_r++;
+
+		result = palloc0(sizeof(Oid) * (count_l + count_r + 1));
+		memcpy(result, left_res, sizeof(Oid) * count_l);
+		memcpy(result + count_l, right_res, sizeof(Oid) * count_r);
+		result[count_l + count_r] = InvalidOid;
+
+		pfree(left_res);
+		pfree(right_res);
+	}
+	else
+	{
+		result = palloc0(sizeof(Oid) * 2);
+		result[0] = expr->dictionary;
+		result[1] = InvalidOid;
+	}
+
+	return result;
+}
+
+static Oid *
+TSMapGetDictionariesParseRule(TSMapRule *rule)
+{
+	Oid		   *result;
+
+	if (rule->dictionary)
+	{
+		result = palloc0(sizeof(Oid) * 2);
+		result[0] = rule->dictionary;
+		result[1] = InvalidOid;
+	}
+	else
+	{
+		if (rule->command.is_expression)
+			result = TSMapGetDictionariesParseExpression(rule->command.expression);
+		else
+			result = TSMapGetDictionariesList(rule->command.ruleList);
+	}
+	return result;
+}
+
+Oid *
+TSMapGetDictionariesList(TSMapRuleList *rules)
+{
+	int			i;
+	Oid		  **results_arr;
+	int		   *sizes;
+	Oid		   *result;
+	int			size;
+	int			offset;
+
+	results_arr = palloc0(sizeof(Oid *) * rules->count);
+	sizes = palloc0(sizeof(int) * rules->count);
+	size = 0;
+	for (i = 0; i < rules->count; i++)
+	{
+		int			count;
+		Oid		   *ptr;
+
+		results_arr[i] = TSMapGetDictionariesParseRule(&rules->data[i]);
+
+		for (count = 0, ptr = results_arr[i]; *ptr != InvalidOid; ptr++)
+			count++;
+
+		sizes[i] = count;
+		size += count;
+	}
+
+	result = palloc(sizeof(Oid) * (size + 1));
+	offset = 0;
+	for (i = 0; i < rules->count; i++)
+	{
+		memcpy(result + offset, results_arr[i], sizeof(Oid) * sizes[i]);
+		offset += sizes[i];
+		pfree(results_arr[i]);
+	}
+	result[offset] = InvalidOid;
+
+	pfree(results_arr);
+	pfree(sizes);
+
+	return result;
+}
+
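+/*
+ * Collect every dictionary OID referenced by the rule list into a
+ * ListDictionary (duplicates are preserved).
+ */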
+ListDictionary *
+TSMapGetListDictionary(TSMapRuleList *rules)
+{
+	ListDictionary *result = palloc0(sizeof(ListDictionary));
+	Oid		   *oids = TSMapGetDictionariesList(rules);
+	int			i;
+	int			count;
+	Oid		   *ptr;
+
+	ptr = oids;
+	count = 0;
+	while (*ptr != InvalidOid)
+	{
+		count++;
+		ptr++;
+	}
+
+	result->len = count;
+	result->dictIds = palloc0(sizeof(Oid) * result->len);
+	ptr = oids;
+	i = 0;
+	while (*ptr != InvalidOid)
+		result->dictIds[i++] = *(ptr++);
+
+	return result;
+}
+
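+/* Recursively copy an expression tree into the given memory context. */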
+static TSMapExpression *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expr, MemoryContext context)
+{
+	TSMapExpression *result;
+
+	if (expr == NULL)
+		return NULL;
+	result = MemoryContextAlloc(context, sizeof(TSMapExpression));
+	memset(result, 0, sizeof(TSMapExpression));
+	if (expr->dictionary != InvalidOid || expr->is_true)
+	{
+		result->dictionary = expr->dictionary;
+		result->is_true = expr->is_true;
+		result->options = expr->options;
+		result->left = result->right = NULL;
+		result->operator = 0;
+	}
+	else
+	{
+		result->left = TSMapExpressionMoveToMemoryContext(expr->left, context);
+		result->right = TSMapExpressionMoveToMemoryContext(expr->right, context);
+		result->operator = expr->operator;
+		result->options = expr->options;
+		result->dictionary = InvalidOid;
+		result->is_true = false;
+	}
+	return result;
+}
+
+static TSMapRule
+TSMapRuleMoveToMemoryContext(TSMapRule *rule, MemoryContext context)
+{
+	TSMapRule	result;
+
+	memset(&result, 0, sizeof(TSMapRule));
+
+	if (rule->dictionary)
+	{
+		result.dictionary = rule->dictionary;
+	}
+	else
+	{
+		result.condition.expression = TSMapExpressionMoveToMemoryContext(rule->condition.expression, context);
+
+		result.command.is_expression = rule->command.is_expression;
+		if (rule->command.is_expression)
+			result.command.expression = TSMapExpressionMoveToMemoryContext(rule->command.expression, context);
+		else
+			result.command.ruleList = TSMapMoveToMemoryContext(rule->command.ruleList, context);
+	}
+
+	return result;
+}
+
+TSMapRuleList *
+TSMapMoveToMemoryContext(TSMapRuleList *rules, MemoryContext context)
+{
+	int			i;
+	TSMapRuleList *result = MemoryContextAlloc(context, sizeof(TSMapRuleList));
+
+	memset(result, 0, sizeof(TSMapRuleList));
+
+	result->count = rules->count;
+	result->data = MemoryContextAlloc(context, sizeof(TSMapRule) * result->count);
+
+	for (i = 0; i < result->count; i++)
+		result->data[i] = TSMapRuleMoveToMemoryContext(&rules->data[i], context);
+
+	return result;
+}
+
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapExpressionFree(expression->left);
+	if (expression->right)
+		TSMapExpressionFree(expression->right);
+	pfree(expression);
+}
+
+static void
+TSMapRuleFree(TSMapRule rule)
+{
+	if (rule.dictionary == InvalidOid)
+	{
+		if (rule.command.is_expression)
+			TSMapExpressionFree(rule.command.expression);
+		else
+			TSMapFree(rule.command.ruleList);
+
+		TSMapExpressionFree(rule.condition.expression);
+	}
+}
+
+void
+TSMapFree(TSMapRuleList *rules)
+{
+	int			i;
+
+	for (i = 0; i < rules->count; i++)
+		TSMapRuleFree(rules->data[i]);
+	pfree(rules->data);
+	pfree(rules);
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index ad5dddf..c71658b 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,6 +16,10 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
@@ -28,328 +32,1296 @@ typedef struct ParsedLex
 	int			type;
 	char	   *lemm;
 	int			lenlemm;
+	int			maplen;
+	bool	   *accepted;
+	bool	   *rejected;
+	bool	   *notFinished;
+	bool	   *holdAccepted;
 	struct ParsedLex *next;
+	TSMapRule  *relatedRule;
 } ParsedLex;
 
-typedef struct ListParsedLex
-{
-	ParsedLex  *head;
-	ParsedLex  *tail;
-} ListParsedLex;
+typedef struct ListParsedLex
+{
+	ParsedLex  *head;
+	ParsedLex  *tail;
+} ListParsedLex;
+
+typedef struct DictState
+{
+	Oid			relatedDictionary;
+	DictSubState subState;
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which were processed by a
+										 * thesaurus-like dictionary but are
+										 * not yet accepted */
+	bool		storeToAccepted;	/* Should the current token be appended
+									 * to accepted or intermediate tokens */
+	bool		processed;		/* Did the dictionary take control during
+								 * the current token's processing */
+	TSLexeme   *tmpResult;		/* Last result returned by a thesaurus-like
+								 * dictionary while it is still waiting for
+								 * more lexemes */
+} DictState;
+
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
+
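+/*
+ * Cache of per-(dictionary, token) lexize results, so that a dictionary
+ * is invoked at most once per token while an expression is evaluated.
+ */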
+typedef struct LexemesBufferEntry
+{
+	Oid			dictId;
+	ParsedLex  *token;
+	TSLexeme   *data;
+} LexemesBufferEntry;
+
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
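+/*
+ * Storage for lexemes whose emission is delayed while a multi-input
+ * (thesaurus-like) dictionary may still reject the current phrase.
+ */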
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;
+} ResultStorage;
+
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;
+	DictSubState dictState;
+	DictStateList dslist;
+	ListParsedLex towork;		/* current list to work */
+	ListParsedLex waste;		/* list of lexemes that already lexized */
+	LexemesBuffer buffer;
+	ResultStorage delayedResults;
+	Oid			skipDictionary;
+} LexizeData;
+
+typedef struct TSDebugContext
+{
+	TSConfigCacheEntry *cfg;
+	TSParserCacheEntry *prsobj;
+	LexDescr   *tokenTypes;
+	void	   *prsdata;
+	LexizeData	ldata;
+	int			tokentype;		/* Last token tokentype */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponded ParsedLex */
+	TSMapRule  *rule;			/* Rule which produced output */
+} TSDebugContext;
+
+static TSLexeme *LexizeExecMapBy(LexizeData *ld, ParsedLex *token, TSMapExpression *left, TSMapExpression *right);
+
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+static void
+LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
+{
+	if (list->tail)
+	{
+		list->tail->next = newpl;
+		list->tail = newpl;
+	}
+	else
+		list->head = list->tail = newpl;
+	newpl->next = NULL;
+}
+
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+	{
+		*correspondLexem = ld->waste.head;
+	}
+	else
+	{
+		LPLClear(&ld->waste);
+	}
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, Oid dictId, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (buffer->data[i].dictId == dictId && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, Oid dictId, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (buffer->data[i].dictId == dictId && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, Oid dictId, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (buffer->data[i].dictId == dictId && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, Oid dictId, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, dictId, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].dictId = dictId;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*
+ * TSLexeme util functions
+ */
+
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove duplicate lexemes, including duplicate copies of whole nvariant groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+					{
+						shouldCopy[i + j] = false;
+					}
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	pfree(lexeme);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
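+/*
+ * A lexeme flagged with TSL_ADDPOS starts a new position; the two
+ * inputs are interleaved position by position.
+ */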
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_i = 0;
+	int			right_i = 0;
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*
+ * Lexemes set operations
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the right lexemes are added after the last left
+ * lexeme with the TSL_ADDPOS flag set.
+ */
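+/*
+ * For example (hypothetical input): UNION of {"cat"} and {"cats"} with
+ * append=true yields {"cat", "cats"} with TSL_ADDPOS set on "cats", so
+ * it is emitted at the following position.
+ */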
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+		{
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+		}
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+		{
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+		}
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Result storage functions
+ */
+
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+static void
+ResultStorageClear(ResultStorage *storage)
+{
+	ResultStorageClearLexemes(storage);
+
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*
+ * Condition and command execution
+ */
+
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, Oid dictId)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictId, token))
+	{
+		res = LexemesBufferGet(&ld->buffer, dictId, token);
+	}
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(
+														 &(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add the state to the list and update the pointer in order
+				 * to work with the copy stored in the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictId, token, res);
+	}
+
+	return res;
+}
+
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, Oid dictId)
+{
+	TSLexeme   *lexemes = LexizeExecDictionary(ld, token, dictId);
+
+	if (lexemes)
+		return false;
+	else
+		return !LexizeExecDictionaryWaitNext(ld, dictId);
+}
+
+static bool
+LexizeExecIsStop(LexizeData *ld, ParsedLex *token, Oid dictId)
+{
+	TSLexeme   *lex = LexizeExecDictionary(ld, token, dictId);
+
+	return lex != NULL && lex[0].lexeme == NULL;
+}
+
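+/*
+ * Evaluate a WHEN condition for a token.  Leaf nodes test a
+ * dictionary's output against IS [NOT] NULL / IS [NOT] STOPWORD;
+ * inner nodes combine results with AND, OR and NOT, and a MAP BY
+ * node is tested on its set-expression result.
+ */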
+static bool
+LexizeExecExpressionBool(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	bool		result;
+
+	if (expression == NULL)
+		result = false;
+	else if (expression->is_true)
+		result = true;
+	else if (expression->dictionary != InvalidOid)
+	{
+		bool		is_null = LexizeExecIsNull(ld, token, expression->dictionary);
+		bool		is_stop = LexizeExecIsStop(ld, token, expression->dictionary);
+		bool		invert = (expression->options & DICTMAP_OPT_NOT) != 0;
+
+		result = true;
+		if ((expression->options & DICTMAP_OPT_IS_NULL) != 0)
+			result = result && (invert ? !is_null : is_null);
+		if ((expression->options & DICTMAP_OPT_IS_STOP) != 0)
+			result = result && (invert ? !is_stop : is_stop);
+	}
+	else
+	{
+		if (expression->operator == DICTMAP_OP_MAPBY)
+		{
+			TSLexeme   *mapby_result = LexizeExecMapBy(ld, token, expression->left, expression->right);
+			bool		is_null = mapby_result == NULL;
+			bool		is_stop = mapby_result != NULL && mapby_result[0].lexeme == NULL;
+			bool		invert = (expression->options & DICTMAP_OPT_NOT) != 0;
+
+			if (expression->left->dictionary != InvalidOid && LexizeExecDictionaryWaitNext(ld, expression->left->dictionary))
+				is_null = false;
+
+			result = true;
+			if ((expression->options & DICTMAP_OPT_IS_NULL) != 0)
+				result = result && (invert ? !is_null : is_null);
+			if ((expression->options & DICTMAP_OPT_IS_STOP) != 0)
+				result = result && (invert ? !is_stop : is_stop);
+		}
+		else
+		{
+			bool		res_left = LexizeExecExpressionBool(ld, token, expression->left);
+			bool		res_right = LexizeExecExpressionBool(ld, token, expression->right);
+
+			switch (expression->operator)
+			{
+				case DICTMAP_OP_NOT:
+					result = !res_right;
+					break;
+				case DICTMAP_OP_OR:
+					result = res_left || res_right;
+					break;
+				case DICTMAP_OP_AND:
+					result = res_left && res_right;
+					break;
+				default:
+					ereport(ERROR,
+							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+							 errmsg("invalid text search configuration boolean expression")));
+					break;
+			}
+		}
+	}
+
+	return result;
+}
+
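+/*
+ * Evaluate a THEN command for a token, combining dictionary outputs
+ * with the UNION, EXCEPT, INTERSECT and MAP BY operators.
+ */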
+static TSLexeme *
+LexizeExecExpressionSet(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *result;
+
+	if (expression->dictionary != InvalidOid)
+	{
+		result = LexizeExecDictionary(ld, token, expression->dictionary);
+	}
+	else
+	{
+		if (expression->operator == DICTMAP_OP_MAPBY)
+		{
+			result = LexizeExecMapBy(ld, token, expression->left, expression->right);
+		}
+		else
+		{
+			TSLexeme   *res_left = LexizeExecExpressionSet(ld, token, expression->left);
+			TSLexeme   *res_right = LexizeExecExpressionSet(ld, token, expression->right);
+
+			switch (expression->operator)
+			{
+				case DICTMAP_OP_UNION:
+					result = TSLexemeUnion(res_left, res_right);
+					break;
+				case DICTMAP_OP_EXCEPT:
+					result = TSLexemeExcept(res_left, res_right);
+					break;
+				case DICTMAP_OP_INTERSECT:
+					result = TSLexemeIntersect(res_left, res_right);
+					break;
+				default:
+					ereport(ERROR,
+							(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+							 errmsg("invalid text search configuration result set expression")));
+					result = NULL;
+					break;
+			}
+		}
+	}
+
+	return result;
+}
 
-typedef struct
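+/*
+ * MAP BY: evaluate the right expression first, then feed each of its
+ * output lexemes back through the left expression as a new input
+ * token, and union the results.
+ */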
+static TSLexeme *
+LexizeExecMapBy(LexizeData *ld, ParsedLex *token, TSMapExpression *left, TSMapExpression *right)
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	TSLexeme   *right_res = LexizeExecExpressionSet(ld, token, right);
+	TSLexeme   *result = NULL;
+	int			right_size = TSLexemeGetSize(right_res);
+	int			i;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+	if (right_res == NULL)
+		return LexizeExecExpressionSet(ld, token, left);
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
-} LexizeData;
+	for (i = 0; i < right_size; i++)
+	{
+		TSLexeme   *tmp_res = NULL;
+		TSLexeme   *prev_res;
+		ParsedLex	tmp_token;
+
+		tmp_token.lemm = right_res[i].lexeme;
+		tmp_token.lenlemm = strlen(right_res[i].lexeme);
+		tmp_token.type = token->type;
+		tmp_token.next = NULL;
+
+		tmp_res = LexizeExecExpressionSet(ld, &tmp_token, left);
+		prev_res = result;
+		result = TSLexemeUnion(prev_res, tmp_res);
+		if (prev_res)
+			pfree(prev_res);
+	}
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
-{
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
+	return result;
 }
 
-static void
-LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
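+/*
+ * Evaluate the rule list for a token: in CASE syntax, the first rule
+ * whose condition holds supplies the command that produces the
+ * lexemes; in comma-separated syntax, dictionaries are tried in order
+ * and TSL_FILTER output replaces the token text.
+ */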
+static TSLexeme *
+LexizeExecCase(LexizeData *ld, ParsedLex *originalToken, TSMapRuleList *rules, TSMapRule **selectedRule)
 {
-	if (list->tail)
+	TSLexeme   *res = NULL;
+	ParsedLex	token = *originalToken;
+
+	if (ld->cfg->lenmap <= token.type || rules == NULL)
 	{
-		list->tail->next = newpl;
-		list->tail = newpl;
+		res = NULL;
 	}
 	else
-		list->head = list->tail = newpl;
-	newpl->next = NULL;
-}
-
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+	{
+		int			i;
 
-	if (list->head)
-		list->head = list->head->next;
+		for (i = 0; i < rules->count; i++)
+		{
+			if (rules->data[i].dictionary != InvalidOid)
+			{
+				/* Comma-separated syntax configuration */
+				res = LexizeExecDictionary(ld, &token, rules->data[i].dictionary);
+				if (!LexizeExecIsNull(ld, &token, rules->data[i].dictionary))
+				{
+					if (selectedRule)
+						*selectedRule = rules->data + i;
+					originalToken->relatedRule = rules->data + i;
+
+					if (res && (res[0].flags & TSL_FILTER))
+					{
+						token.lemm = res[0].lexeme;
+						token.lenlemm = strlen(res[0].lexeme);
+					}
+					else
+					{
+						break;
+					}
+				}
+			}
+			else if (LexizeExecExpressionBool(ld, &token, rules->data[i].condition.expression))
+			{
+				if (selectedRule)
+					*selectedRule = rules->data + i;
+				originalToken->relatedRule = rules->data + i;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+				if (rules->data[i].command.is_expression)
+					res = LexizeExecExpressionSet(ld, &token, rules->data[i].command.expression);
+				else
+					res = LexizeExecCase(ld, &token, rules->data[i].command.ruleList, selectedRule);
+				break;
+			}
+		}
+	}
 
 	return res;
 }
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+/*
+ * LexizeExec and helper functions
+ */
+
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+	int			i;
+	TSLexeme   *res = NULL;
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
-}
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-static void
-RemoveHead(LexizeData *ld)
-{
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
 
-	ld->posDict = 0;
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
 static TSLexeme *
-LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
+LexizeExec(LexizeData *ld, ParsedLex **correspondLexem, TSMapRule **selectedRule)
 {
+	ParsedLex  *token;
+	TSMapRuleList *rules;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
-	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
 
-		while (ld->towork.head)
+	token = ld->towork.head;
+	if (token == NULL)
+	{
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
+	else
+	{
+		rules = ld->cfg->map[token->type];
+		if (rules != NULL)
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			res = LexizeExecCase(ld, token, rules, selectedRule);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
+		{
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
 			}
+		}
+
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || rules != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			for (i = ld->posDict; i < map->len; i++)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				if (!ld->dslist.states[i].processed)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
 				}
+			}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
-
-				if (res->flags & TSL_FILTER)
+			if (intermediateTokens && intermediateTokens->head)
+			{
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
-
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
 			}
-
-			RemoveHead(ld);
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (rules != NULL)
+				res = NULL;
 		}
+
+		if (rules != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
-		/*
-		 * Dictionary ld->curDictId asks  us about following words
-		 */
+	if (prevIterationResult)
+	{
+		res = prevIterationResult;
+	}
+	else
+	{
+		int			i;
 
-		while (ld->curSub)
+		for (i = 0; i < ld->dslist.listLength; i++)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
+			if (ld->dslist.states[i].storeToAccepted)
 			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
-
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
 			}
-
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
-
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
-
-			if (ld->dictState.getnext)
+			else
 			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
+		}
+	}
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	if (removeHead)
+		RemoveHead(ld);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	if (ld->dslist.listLength > 0)
+	{
+		/*
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
+		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
+		if (res)
+			pfree(res);
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
+		{
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
+
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			TSLexeme   *oldRes = res;
+
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
+			if (oldRes)
+				pfree(oldRes);
+			ResultStorageClear(&ld->delayedResults);
 		}
+		setCorrLex(ld, correspondLexem);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
+
+	LexemesBufferClear(&ld->buffer);
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
+
+	return res;
 }
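
The effect of the final TSLexemeFilterMulti/TSLexemeRemoveDuplications pass
is easy to observe from SQL: when both operands of a UNION recognize the
same token, the duplicate lexeme is emitted only once. A minimal sketch,
reusing the english_multi configuration from the regression tests below;
the commented output is what I expect under this patch:

ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
	asciiword
	WITH CASE
	WHEN english_stem OR simple THEN english_stem UNION simple END;

SELECT to_tsvector('english_multi', 'book');
-- english_stem and simple both return "book"; after deduplication the
-- vector is just 'book':1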
 
 /*
+ * ts_parse API functions
+ */
+
+/*
  * Parse string and lexize words.
  *
  * prs will be filled in.
@@ -357,7 +1329,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1347,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
-		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
+		while ((norms = LexizeExec(&ldata, NULL, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,12 +1407,200 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
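
The widened loop condition (type > 0 || ldata.towork.head) keeps draining
the towork queue after the parser has produced its last token, which is
what flushes lexemes delayed for a thesaurus phrase ending exactly at the
end of the input. A sketch, assuming the sample thesaurus from the tsdicts
tests that maps "one two" to "12":

SELECT to_tsvector('english_multi2', 'one two');
-- the phrase completes on the final token; the extra drain iterations
-- still emit '12':1 'one':1 'two':2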
 
 /*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to towork queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Parse text and print debug information for each token, such as
+ * token type, dictionary map configuration, selected command and lexemes.
+ * Arguments: regconfig (Oid) cfgId, text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens), &(context->rule));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens), &(context->rule));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 6);
+		/* makeStringInfo() already initializes the buffer */
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			TSMapPrintRuleList(context->ldata.cfg->map[lex->type], str, 0);
+			values[3] = str->data;
+			str = makeStringInfo();
+
+			if (lex->relatedRule)
+			{
+				TSMapPrintRule(lex->relatedRule, str, 0);
+				values[4] = str->data;
+				str = makeStringInfo();
+			}
+		}
+
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[5] = str->data;
+		else
+			values[5] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
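
For reference, the new ts_debug replaces the old dictionary column with a
command column that prints the CASE branch which fired. The exact text is
rendered by TSMapPrintRule, so the row below is only a sketch for a
CASE-based configuration such as english_multi:

SELECT alias, token, command, lexemes
FROM ts_debug('english_multi', 'books');
--    alias   | token |          command          |   lexemes
--  asciiword | books | english_stem UNION simple | {book,books}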
+/*
  * Headline framework
  */
 static void
@@ -532,12 +1698,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,45 +1717,50 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
-			if ((norms = LexizeExec(&ldata, &lexs)) != NULL)
+			if ((norms = LexizeExec(&ldata, &lexs, NULL)) != NULL)
 			{
 				prs->vectorpos++;
 				addHLParsedLex(prs, query, lexs, norms);
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +1813,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index 56d4cf0..3868b3c 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -19,7 +19,17 @@
 #include "miscadmin.h"
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
-
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_config_map.h"
+#include "catalog/pg_ts_dict.h"
+#include "storage/lockdefs.h"
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "utils/fmgroids.h"
+#include "utils/builtins.h"
+#include "tsearch/ts_cache.h"
 
 /*
  * Given the base name and extension of a tsearch config file, return
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 888edbb..0628b9c 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index da5c8ea..da18387 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,10 +39,13 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
 #include "utils/inval.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/regproc.h"
@@ -51,13 +54,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -414,11 +416,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapRuleList *mapruleslist[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapRuleList *rules_tmp;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -449,8 +450,10 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+				{
+					if (entry->map[i])
+						TSMapFree(entry->map[i]);
+				}
 				pfree(entry->map);
 			}
 		}
@@ -464,13 +467,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -482,6 +483,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapruleslist, 0, sizeof(mapruleslist));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -491,51 +493,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
+			maxtokentype = toktype;
+			rules_tmp = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapruleslist[maxtokentype] = TSMapMoveToMemoryContext(rules_tmp, CacheMemoryContext);
+			TSMapFree(rules_tmp);
+			rules_tmp = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapRuleList **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapRuleList *) * entry->lenmap);
+			memcpy(entry->map, mapruleslist,
+				   sizeof(TSMapRuleList *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
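
A consequence of dropping mapseqno that is easy to check from SQL:
pg_ts_config_map now holds exactly one row per (mapcfg, maptokentype), and
lookup_ts_config_cache performs a single JsonbToTSMap per row instead of
accumulating up to MAXDICTSPERTT dictionary OIDs. For example:

SELECT maptokentype, count(*)
FROM pg_ts_config_map
WHERE mapcfg = 'english'::regconfig
GROUP BY maptokentype;
-- with this patch every count is 1; the rule list lives in mapdicts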
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8733426..ceff4d1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14186,10 +14186,11 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 					  "SELECT\n"
 					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+					  "  dictionary_map_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
 					  "FROM pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
+					  "GROUP BY m.mapcfg, m.maptokentype\n"
+					  "ORDER BY m.mapcfg, m.maptokentype",
 					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -14203,20 +14204,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 		char	   *tokenname = PQgetvalue(res, i, i_tokenname);
 		char	   *dictname = PQgetvalue(res, i, i_dictname);
 
-		if (i == 0 ||
-			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
-		{
-			/* starting a new token type, so start a new command */
-			if (i > 0)
-				appendPQExpBufferStr(q, ";\n");
-			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
-							  fmtId(cfginfo->dobj.name));
-			/* tokenname needs quoting, dictname does NOT */
-			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
-							  fmtId(tokenname), dictname);
-		}
-		else
-			appendPQExpBuffer(q, ", %s", dictname);
+		/* starting a new token type, so start a new command */
+		if (i > 0)
+			appendPQExpBufferStr(q, ";\n");
+		appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
+						  fmtId(cfginfo->dobj.name));
+		/* tokenname needs quoting, dictname does NOT */
+		appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH \n%s",
+						  fmtId(tokenname), dictname);
 	}
 
 	if (ntups > 0)
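
Since dictionary_map_to_text() renders the whole rule list, pg_dump now
emits one ADD MAPPING statement per token type, whose body may be a full
CASE expression. A sketch of the dumped form for the english_multi mapping
from the tests (the exact layout comes from TSMapPrintRuleList, so the
whitespace here is a guess):

ALTER TEXT SEARCH CONFIGURATION english_multi
    ADD MAPPING FOR asciiword WITH
CASE
    WHEN english_stem OR simple THEN english_stem UNION simple
END;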
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 0688571..98f000b 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4580,13 +4580,7 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 					  "  ( SELECT t.alias FROM\n"
 					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
+					  " dictionary_map_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
 					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
 					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index 9a7f5b2..362fd17 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */
 
 /*							yyyymmddN */
-#define CATALOG_VERSION_NO	201710161
+#define CATALOG_VERSION_NO	201710181
 
 #endif
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index ef84936..db487cf 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 93c031a..572374e 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4925,6 +4925,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_map_to_text	PGNSP PGUID 12 100 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_map_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configurationconfiguration  map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,25,25,1009}" "{i,i,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index 3df0519..ea0fd0a 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,106 @@
  */
 #define TSConfigMapRelationId	3603
 
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+typedef struct TSMapExpression
+{
+	int			operator;
+	Oid			dictionary;
+	int			options;
+	bool		is_true;
+	struct TSMapExpression *left;
+	struct TSMapExpression *right;
+} TSMapExpression;
+
+typedef struct TSMapCommand
+{
+	bool		is_expression;
+	void	   *ruleList;		/* this is a TSMapRuleList object */
+	TSMapExpression *expression;
+} TSMapCommand;
+
+typedef struct TSMapCondition
+{
+	TSMapExpression *expression;
+} TSMapCondition;
+
+typedef struct TSMapRule
+{
+	Oid			dictionary;
+	TSMapCondition condition;
+	TSMapCommand command;
+} TSMapRule;
+
+typedef struct TSMapRuleList
+{
+	TSMapRule  *data;
+	int			count;
+} TSMapRuleList;
+
 /* ----------------
  *		compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define DICTMAP_OP_OR			1
+#define DICTMAP_OP_AND			2
+#define DICTMAP_OP_THEN			3
+#define DICTMAP_OP_MAPBY		4
+#define DICTMAP_OP_UNION		5
+#define DICTMAP_OP_EXCEPT		6
+#define DICTMAP_OP_INTERSECT	7
+#define DICTMAP_OP_NOT			8
+
+/* ----------------
+ *		Dictionary map operand options (bit mask)
+ * ----------------
+ */
+
+#define DICTMAP_OPT_NOT			1
+#define DICTMAP_OPT_IS_NULL		2
+#define DICTMAP_OPT_IS_STOP		4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
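
The bootstrap rows above store the old single-dictionary mappings as
one-element jsonb rule lists: 3748 is the OID of the built-in simple
configuration and 3765 the OID of the simple dictionary. That makes the
stored form directly inspectable:

SELECT mapdicts
FROM pg_ts_config_map
WHERE mapcfg = 3748 AND maptokentype = 1;
-- [3765]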
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index ffeeb49..d956b56 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -380,6 +380,8 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 732e5d6..af4e961 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3369,6 +3369,33 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+typedef enum DictPipeElemType
+{
+	DICT_MAP_OPERAND,
+	DICT_MAP_OPERATOR,
+	DICT_MAP_CONST_TRUE
+} DictPipeType;
+
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictPipeElemType */
+	List	   *dictname;		/* Used in DICT_MAP_OPERAND */
+	struct DictMapExprElem *left;	/* Used in DICT_MAP_OPERATOR */
+	struct DictMapExprElem *right;	/* Used in DICT_MAP_OPERATOR */
+	int8		oper;			/* Used in DICT_MAP_OPERATOR */
+	int8		options;		/* Can be used in the future */
+} DictMapExprElem;
+
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	DictMapExprElem *condition;
+	DictMapExprElem *command;
+	List	   *commandmaps;
+	List	   *dictnames;
+} DictMapElem;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3381,6 +3408,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	List	   *dict_map;
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index f50e45e..5100aac 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -240,6 +240,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
@@ -376,6 +377,7 @@ PG_KEYWORD("statement", STATEMENT, UNRESERVED_KEYWORD)
 PG_KEYWORD("statistics", STATISTICS, UNRESERVED_KEYWORD)
 PG_KEYWORD("stdin", STDIN, UNRESERVED_KEYWORD)
 PG_KEYWORD("stdout", STDOUT, UNRESERVED_KEYWORD)
+PG_KEYWORD("stopword", STOPWORD, UNRESERVED_KEYWORD)
 PG_KEYWORD("storage", STORAGE, UNRESERVED_KEYWORD)
 PG_KEYWORD("strict", STRICT_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("strip", STRIP_P, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index abff0fd..bfde460 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapRuleList **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..73b87de
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of a text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapRuleList structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapRuleList *rules);
+
+/* Extract TSMapRuleList from JSONB-formatted data */
+extern TSMapRuleList * JsonbToTSMap(Jsonb *json);
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapRuleList *rules, Oid oldDict, Oid newDict);
+
+/* Return all dictionaries in the rule list, in definition order, as an array of Oids */
+extern Oid *TSMapGetDictionariesList(TSMapRuleList *rules);
+
+/* Return all dictionaries in the rule list, in definition order, as a ListDictionary structure */
+extern ListDictionary *TSMapGetListDictionary(TSMapRuleList *rules);
+
+/* Move rule list into specified memory context */
+extern TSMapRuleList * TSMapMoveToMemoryContext(TSMapRuleList *rules, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapFree(TSMapRuleList *rules);
+
+/* Print rule in human-readable format */
+extern void TSMapPrintRule(TSMapRule *rule, StringInfo result, int depth);
+
+/* Print rule list in human-readable format */
+extern void TSMapPrintRuleList(TSMapRuleList *rules, StringInfo result, int depth);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 94ba7fc..e933d7b 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -14,6 +14,7 @@
 #define _PG_TS_PUBLIC_H_
 
 #include "tsearch/ts_type.h"
+#include "catalog/pg_ts_config_map.h"
 
 /*
  * Parser's framework
@@ -115,6 +116,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44f..40029f3 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0744ef8..760673c 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,145 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_multi(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem INTERSECT simple END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN simple EXCEPT english_stem END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH ispell;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN ispell THEN ispell
+	ELSE english_stem
+END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN hunspell THEN english_stem MAP BY hunspell
+	ELSE english_stem
+END;
+SELECT to_tsvector('english_multi', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_multi', 'booking');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,3 +719,74 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION english_multi2(
+					COPY=english_multi
+);
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus ELSE english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus IS NOT NULL OR english_stem IS NOT NULL THEN thesaurus UNION english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+          to_tsvector           
+--------------------------------
+ '1987a':2 'sn':1 'supernova':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('english_multi2', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('english_multi2', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('english_multi2', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus UNION simple
+	ELSE english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+                                         to_tsvector                                         
+---------------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..5b6fe73 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,66 +567,65 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            | dictionaries |   command    | lexemes 
+-----------+-----------------+----------------------------+--------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  |              |              | 
+ asciiword | Word, all ASCII | abc                        | english_stem | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      |              |              | 
+ asciiword | Word, all ASCII | def                        | english_stem | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     |              |              | 
+ asciiword | Word, all ASCII | ghi                        | english_stem | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     |              |              | 
+ asciiword | Word, all ASCII | jkl                        | english_stem | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> |              |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                |              |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | simple       | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | simple       | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | simple       | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                |              |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------+------------------------------
+ protocol | Protocol head | http://                    |              |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | simple       | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | simple       | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | simple       | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     |              |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | command |        lexemes         
+----------+---------------+----------------------+--------------+---------+------------------------
+ protocol | Protocol head | http://              |              |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | simple       | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | simple       | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | simple       | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | command |         lexemes          
+----------+-------------+------------------------+--------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | simple       | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | simple       | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | simple       | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
-  dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims,
-  lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
+  dictionaries, lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
 from ts_debug('english', 'a title');
- token |   alias   |  dictionaries  | dnull | ddims | lexemes | lnull | ldims 
--------+-----------+----------------+-------+-------+---------+-------+-------
- a     | asciiword | {english_stem} | f     | [1:1] | {}      | f     | 
-       | blank     | {}             | f     |       |         | t     | 
- title | asciiword | {english_stem} | f     | [1:1] | {titl}  | f     | [1:1]
+ token |   alias   | dictionaries | lexemes | lnull | ldims 
+-------+-----------+--------------+---------+-------+-------
+ a     | asciiword | english_stem | {}      | f     | 
+       | blank     |              |         | t     | 
+ title | asciiword | english_stem | {titl}  | f     | [1:1]
 (3 rows)
 
 -- to_tsquery
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990..320e220 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index a5a569e..337302b 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,68 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_multi(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN english_stem INTERSECT simple END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN english_stem OR simple THEN simple EXCEPT english_stem END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH ispell;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN ispell THEN ispell
+	ELSE english_stem
+END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi ALTER MAPPING FOR
+	asciiword
+	WITH CASE
+	WHEN hunspell THEN english_stem MAP BY hunspell
+	ELSE english_stem
+END;
+
+SELECT to_tsvector('english_multi', 'book');
+SELECT to_tsvector('english_multi', 'books');
+SELECT to_tsvector('english_multi', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -188,3 +250,41 @@ ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR
 SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
+CREATE TEXT SEARCH CONFIGURATION english_multi2(
+					COPY=english_multi
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN english_stem OR simple THEN english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus ELSE english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus IS NOT NULL OR english_stem IS NOT NULL THEN thesaurus UNION english_stem
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('english_multi2', 'one two');
+SELECT to_tsvector('english_multi2', 'one two three');
+SELECT to_tsvector('english_multi2', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION english_multi2 ALTER MAPPING FOR asciiword WITH CASE
+	WHEN thesaurus THEN thesaurus UNION simple
+	ELSE english_stem UNION simple
+END;
+SELECT to_tsvector('english_multi2', 'The Mysterious Rings of Supernova 1987A');
+
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..8ef3d71 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
@@ -146,8 +146,7 @@ SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
 SELECT token, alias,
-  dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims,
-  lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
+  dictionaries, lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
 from ts_debug('english', 'a title');
 
 -- to_tsquery
#3Emre Hasegeli
emre@hasegeli.com
In reply to: Aleksandr Parfenov (#1)
Re: Flexible configuration for full-text search

The patch introduces way to configure FTS based on CASE/WHEN/THEN/ELSE
construction.

Interesting feature. I needed this flexibility before when I was
implementing text search for a Turkish private listing application.
Aleksandr and Arthur were kind enough to discuss it with me off-list
today.

1) Multilingual search. Can be used for FTS on a set of documents in
different languages (example for German and English languages).

ALTER TEXT SEARCH CONFIGURATION multi
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN english_hunspell AND german_hunspell THEN
english_hunspell UNION german_hunspell
WHEN english_hunspell THEN english_hunspell
WHEN german_hunspell THEN german_hunspell
ELSE german_stem UNION english_stem
END;

I understand the need to support branching, but this syntax is overly
complicated. I don't think there is any need to support a different set
of dictionaries as the condition and the action. Something like this might
work better:

ALTER TEXT SEARCH CONFIGURATION multi
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH
CASE english_hunspell UNION german_hunspell
WHEN MATCH THEN KEEP
ELSE german_stem UNION english_stem
END;

To put it formally:

ALTER TEXT SEARCH CONFIGURATION name
ADD MAPPING FOR token_type [, ... ] WITH config

where config is one of:

dictionary_name
config { UNION | INTERSECT | EXCEPT } config
CASE config WHEN [ NO ] MATCH THEN [ KEEP ELSE ] config END

2) Combination of exact search with morphological one. This patch not
fully solve the problem but it is a step toward solution. Currently, we
should split exact and morphological search in query manually and use
separate index for each part. With new way to configure FTS we can use
following configuration:

ALTER TEXT SEARCH CONFIGURATION exact_and_morph
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN english_hunspell THEN english_hunspell UNION simple
ELSE english_stem UNION simple
END

This could be:

CASE english_hunspell
THEN KEEP
ELSE english_stem
END
UNION
simple
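
Spelled out as a full statement under the proposed grammar, that would
presumably read as follows (just a sketch, reusing the token types and
dictionary names from the example above):

ALTER TEXT SEARCH CONFIGURATION exact_and_morph
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH
CASE english_hunspell
WHEN MATCH THEN KEEP
ELSE english_stem
END
UNION
simple;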

3) Using different dictionaries for recognizing and output generation.
As I mentioned before, in the new syntax the condition and command are separate
and we can use this for some more complex text processing. Here is an
example for processing only nouns:

ALTER TEXT SEARCH CONFIGURATION nouns_only
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN english_noun THEN english_hunspell
END

This would also still work with the simpler syntax because
"english_noun", still being a dictionary, would pass the tokens to the
next one.

4) Special stopword processing allows us to discard stopwords even if
the main dictionary doesn't support such a feature (for example, the
pl_ispell dictionary keeps stopwords in the text):

ALTER TEXT SEARCH CONFIGURATION pl_without_stops
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN simple_pl IS NOT STOPWORD THEN pl_ispell
END

Instead of supporting the old way of putting stopwords in dictionaries, we
can make them dictionaries of their own. This would then become
something like:

CASE polish_stopword
WHEN NO MATCH THEN polish_isspell
END
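
As a full statement that would presumably be (again just a sketch, with
polish_stopword as a hypothetical dictionary that matches only
stopwords):

ALTER TEXT SEARCH CONFIGURATION pl_without_stops
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH
CASE polish_stopword
WHEN NO MATCH THEN polish_isspell
END;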


#4Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Emre Hasegeli (#3)
Re: Flexible configuration for full-text search

I'm mostly happy with the mentioned modifications, but I have a few
questions to clarify some points. I will send a new patch in a week or two.

On Thu, 26 Oct 2017 20:01:14 +0200
Emre Hasegeli <emre@hasegeli.com> wrote:

To put it formally:

ALTER TEXT SEARCH CONFIGURATION name
ADD MAPPING FOR token_type [, ... ] WITH config

where config is one of:

dictionary_name
config { UNION | INTERSECT | EXCEPT } config
CASE config WHEN [ NO ] MATCH THEN [ KEEP ELSE ] config END

According to the formal definition, the following configurations are valid:

CASE english_hunspell WHEN MATCH THEN KEEP ELSE simple END
CASE english_noun WHEN MATCH THEN english_hunspell END

But the configuration:

CASE english_noun WHEN MATCH THEN english_hunspell ELSE simple END

is not (as I understand it, ELSE can be used only with KEEP).

I think we should decide whether to allow or disallow the usage of different
dictionaries for the match check (between CASE and WHEN) and the result
(after THEN). If the answer is 'allow', maybe we should allow the
third example too, for consistency in configurations.

3) Using different dictionaries for recognizing and output
generation. As I mentioned before, in the new syntax the condition and
command are separate and we can use this for some more complex text
processing. Here is an example for processing only nouns:

ALTER TEXT SEARCH CONFIGURATION nouns_only
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part WITH CASE
WHEN english_noun THEN english_hunspell
END

This would also still work with the simpler syntax because
"english_noun", still being a dictionary, would pass the tokens to the
next one.

Based on the formal definition, it is possible to describe this example in
the following manner:
CASE english_noun WHEN MATCH THEN english_hunspell END

The question is the same as in the previous example.

Instead of supporting old way of putting stopwords on dictionaries, we
can make them dictionaries on their own. This would then become
something like:

CASE polish_stopword
WHEN NO MATCH THEN polish_isspell
END

Currently, stopwords increment the position, for example:
SELECT to_tsvector('english','a test message');
---------------------
'messag':3 'test':2

The stopword 'a' has position 1 but it is not in the vector.
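
For comparison, if the stopword stopped incrementing the counter, the
positions would shift (hypothetical output):

SELECT to_tsvector('english','a test message');
---------------------
'messag':2 'test':1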

If we want to preserve this behavior, we should somehow pass a stopword to
the tsvector composition function (parsetext in ts_parse.c) for the counter
increment, or increment it in another way. Currently, an empty lexeme
array is passed as the result of LexizeExec.

One possible way to do so is something like:
CASE polish_stopword
WHEN MATCH THEN KEEP -- stopword counting
ELSE polish_isspell
END

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


#5Emre Hasegeli
emre@hasegeli.com
In reply to: Aleksandr Parfenov (#4)
Re: Flexible configuration for full-text search

I'm mostly happy with the mentioned modifications, but I have a few
questions to clarify some points. I will send a new patch in a week or two.

I am glad you liked it. Though, I think we should get approval from
more senior community members or committers about the syntax before
we put more effort into the code.

But the configuration:

CASE english_noun WHEN MATCH THEN english_hunspell ELSE simple END

is not (as I understand it, ELSE can be used only with KEEP).

I think we should decide whether to allow or disallow the usage of different
dictionaries for the match check (between CASE and WHEN) and the result
(after THEN). If the answer is 'allow', maybe we should allow the
third example too, for consistency in configurations.

I think you are right. We had better allow this too. Then the CASE syntax becomes:

CASE config
WHEN [ NO ] MATCH THEN { KEEP | config }
[ ELSE config ]
END

Based on the formal definition, it is possible to describe this example in
the following manner:
CASE english_noun WHEN MATCH THEN english_hunspell END

The question is the same as in the previous example.

I couldn't understand the question.

Currently, stopwords increment the position, for example:
SELECT to_tsvector('english','a test message');
---------------------
'messag':3 'test':2

The stopword 'a' has position 1 but it is not in the vector.

Does this problem only apply to stopwords, or to the whole thing we are
inventing? Shouldn't we preserve the positions through the pipeline?

If we want to preserve this behavior, we should somehow pass a stopword to
the tsvector composition function (parsetext in ts_parse.c) for the counter
increment, or increment it in another way. Currently, an empty lexeme
array is passed as the result of LexizeExec.

One possible way to do so is something like:
CASE polish_stopword
WHEN MATCH THEN KEEP -- stopword counting
ELSE polish_isspell
END

This would mean keeping the stopwords. What we want is

CASE polish_stopword -- stopword counting
WHEN NO MATCH THEN polish_isspell
END

Do you think it is possible?


#6Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Aleksandr Parfenov (#2)
Re: Flexible configuration for full-text search

On Sat, Oct 21, 2017 at 1:39 AM, Aleksandr Parfenov
<a.parfenov@postgrespro.ru> wrote:

In attachment updated patch with fixes of empty XML tags in
documentation.

Hi Aleksandr,

I'm not sure if this is expected at this stage, but just in case you
aren't aware, with this version of the patch the binary upgrade test
in
src/bin/pg_dump/t/002_pg_dump.pl fails for me:

# Failed test 'binary_upgrade: dumps ALTER TEXT SEARCH CONFIGURATION
dump_test.alt_ts_conf1 ...'
# at t/002_pg_dump.pl line 6715.

--
Thomas Munro
http://www.enterprisedb.com


#7Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Thomas Munro (#6)
Re: Flexible configuration for full-text search

On Mon, 6 Nov 2017 18:05:23 +1300
Thomas Munro <thomas.munro@enterprisedb.com> wrote:

On Sat, Oct 21, 2017 at 1:39 AM, Aleksandr Parfenov
<a.parfenov@postgrespro.ru> wrote:

In attachment updated patch with fixes of empty XML tags in
documentation.

Hi Aleksandr,

I'm not sure if this is expected at this stage, but just in case you
aren't aware, with this version of the patch the binary upgrade test
in
src/bin/pg_dump/t/002_pg_dump.pl fails for me:

# Failed test 'binary_upgrade: dumps ALTER TEXT SEARCH CONFIGURATION
dump_test.alt_ts_conf1 ...'
# at t/002_pg_dump.pl line 6715.

Hi Thomas,

Thank you for noticing it. I will investigate it while working on the next
version of the patch.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


#8Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Emre Hasegeli (#5)
Re: Flexible configuration for full-text search

On Tue, 31 Oct 2017 09:47:57 +0100
Emre Hasegeli <emre@hasegeli.com> wrote:

If we want to preserve this behavior, we should somehow pass a stopword
to the tsvector composition function (parsetext in ts_parse.c) for the
counter increment, or increment it in another way. Currently, an
empty lexeme array is passed as the result of LexizeExec.

One possible way to do so is something like:
CASE polish_stopword
WHEN MATCH THEN KEEP -- stopword counting
ELSE polish_isspell
END

This would mean keeping the stopwords. What we want is

CASE polish_stopword -- stopword counting
WHEN NO MATCH THEN polish_isspell
END

Do you think it is possible?

Hi Emre,

I thought about how it can be implemented. The way I see it is to increment
the word counter if any checked dictionary matched the word, even
without returning a lexeme. The main drawback is that the counter
increment is implicit.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


#9Michael Paquier
michael.paquier@gmail.com
In reply to: Aleksandr Parfenov (#7)
Re: [HACKERS] Flexible configuration for full-text search

On Tue, Nov 7, 2017 at 3:18 PM, Aleksandr Parfenov
<a.parfenov@postgrespro.ru> wrote:

On Mon, 6 Nov 2017 18:05:23 +1300
Thomas Munro <thomas.munro@enterprisedb.com> wrote:

On Sat, Oct 21, 2017 at 1:39 AM, Aleksandr Parfenov
<a.parfenov@postgrespro.ru> wrote:

In attachment updated patch with fixes of empty XML tags in
documentation.

Hi Aleksandr,

I'm not sure if this is expected at this stage, but just in case you
aren't aware, with this version of the patch the binary upgrade test
in
src/bin/pg_dump/t/002_pg_dump.pl fails for me:

# Failed test 'binary_upgrade: dumps ALTER TEXT SEARCH CONFIGURATION
dump_test.alt_ts_conf1 ...'
# at t/002_pg_dump.pl line 6715.

Hi Thomas,

Thank you for noticing it. I will investigate it while working on the next
version of the patch.

The next version has been pending for three weeks, so I am marking the
patch as returned with feedback for now.
--
Michael

#10Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Emre Hasegeli (#5)
2 attachment(s)
Re: [HACKERS] Flexible configuration for full-text search

Hi,

On Tue, 31 Oct 2017 09:47:57 +0100
Emre Hasegeli <emre@hasegeli.com> wrote:

I am glad you liked it. Though, I think we should get approval from
more senior community members or committers about the syntax before
we put more effort into the code.

I postponed a new version of the patch in order to wait for more
feedback, but I think now is the time to send it and push the discussion
further.

I kept all the feedback in mind while reworking the patch, so the FTS
syntax and behavior have changed since the previous version. But I'm not
sure about one last thing:

CASE polish_stopword -- stopword counting
WHEN NO MATCH THEN polish_isspell
END

Do you think it is possible?

If we count tokens in such a case, any dropped words will be
counted too. For example:

CASE banned_words
WHEN NO MATCH THEN some_dictionary
END

I'm not sure about that behavior due to the implicit use of the
token produced by the 'banned_words' dictionary in the example. In the third
version of the patch I keep the behavior without an implicit use of
tokens for counting.
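
To make the difference concrete, a hypothetical illustration (the
configuration name and dictionaries are assumed, and 'spam' is assumed to
be matched by banned_words):

SELECT to_tsvector('banned_cfg', 'spam test');
-- with implicit counting:    'test':2
-- v3 behavior (no counting): 'test':1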

The new version of the patch is attached, as well as a
little README file with a description of the changes in each file. Any
feedback is welcome.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v3.patch (text/x-patch)
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94..58ce4c7 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionaries_map</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionaries_map</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">dictionaries_map</replaceable></term>
+    <listitem>
+     <para>
+      The dictionaries map is a tree-like expression of
+      condition/command/elsebranch triples that defines the way to process
+      text. The elsebranch part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +170,61 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionaries map</title>
+
+  <refsect2>
+   <title>Format</title>
+   <programlisting>
+    <replaceable class="parameter">dictionary_name</replaceable>
+
+    <replaceable class="parameter">dictionaries_map</replaceable> { UNION | EXCEPT | INTERSECT | MAP } <replaceable class="parameter">dictionaries_map</replaceable>
+
+    CASE
+      <replaceable class="parameter">dictionaries_map</replaceable> WHEN <optional>NO</optional> MATCH THEN <replaceable class="parameter">command</replaceable>
+      <optional> ELSE <replaceable class="parameter">dictionaries_map</replaceable> </optional>
+    END
+   </programlisting>
+   <para>
+    A command is
+   </para>
+
+   <programlisting>
+    <replaceable class="parameter">dictionaries_map</replaceable>
+    or
+    KEEP
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">dictionaries_map</replaceable> can be used
+    in three different formats. The simplest format is the name of a dictionary to
+    use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, the user should interconnect dictionaries with operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on sets.
+    The special operator <literal>MAP</literal> takes the output of the left subexpression
+    and uses it as the input to the right subexpression.
+   </para>
+   <para>
+    The last but not least format of
+    <replaceable class="parameter">dictionaries_map</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    replaceable parts. The first one is the configuration that constructs the
+    lexeme set for the match check. If the condition is triggered, the command is executed.
+    Use the command <literal>KEEP</literal> in order to avoid repeating the same
+    configuration in the condition and command parts. However, the command may differ from
+    the condition configuration. The <literal>ELSE</literal> branch is executed
+    otherwise.
+   </para>
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 4dc52ec..fb05de4 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type; the condition and command are logical and set expressions
+    on dictionaries (<xref linkend="textsearch-dictionaries"/>), respectively.
+    The first pair whose condition evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token, based on the command.  For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is also ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2229,14 +2230,6 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
     </listitem>
     <listitem>
      <para>
-      a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
-      the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
-     </para>
-    </listitem>
-    <listitem>
-     <para>
       an empty array if the dictionary knows the token, but it is a stop word
      </para>
     </listitem>
@@ -2264,38 +2257,81 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/elsebranch triples. Each
+   condition is evaluated in order to select the appropriate command to generate
+   the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is a dictionary expression with dictionaries used as operands and
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP</literal>.
+
+   The special operator <literal>MAP</literal> uses the output of the left subexpression as
+   the input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the additional
+    keyword <literal>KEEP</literal> to use the result of the condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted to process a token, and the first
+   non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the narrowest, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer in the comma-separated variant of the mapping:
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+   Another example is a configuration for both the English and German languages via
+   the operator-separated variant of the mapping:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE
+            english_ispell WHEN MATCH THEN KEEP
+            ELSE english_stem
+        END
+        UNION
+        CASE
+            german_ispell WHEN MATCH THEN KEEP
+            ELSE german_stem
+        END;
+</programlisting>
+
   </para>
 
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   A filtering dictionary can be placed anywhere in a comma-separated list,
+   except at the end where it'd be useless.
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   Otherwise, a filtering dictionary should be placed to the left of the <literal>MAP</literal>
+   operator. If the filtering dictionary returns <literal>NULL</literal>, it passes the initial token
+   further along the processing chain.
   </para>
 
   <sect2 id="textsearch-stopwords">
@@ -2462,9 +2498,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token | dictionaries |   command    | lexemes 
+-----------+-----------------+-------+--------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | english_stem | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2476,9 +2512,9 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |      dictionaries       |  command   | lexemes 
+-----------+-----------------+-------+-------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | my_synonym,english_stem | my_synonym | {paris}
 </screen>
    </para>
 
@@ -3107,6 +3143,20 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE
+        WHEN pg_dict IS NOT NULL THEN pg_dict
+        WHEN english_ispell THEN english_ispell
+        ELSE english_stem
+      END;
+</programlisting>
+
+    Or use alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3181,8 +3231,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">alias</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
-         OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">dictionaries</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3220,20 +3270,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionaries</replaceable> <type>regdictionary[]</type> &mdash; the
-       dictionaries selected by the configuration for this token type
+       <replaceable>dictionaries</replaceable> <type>text</type> &mdash; the
+       dictionaries defined by the configuration for this token type
       </para>
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way to generate output
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3246,32 +3296,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token | dictionaries |   command    | lexemes 
+-----------+-----------------+-------+--------------+--------------+---------
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | fat   | english_stem | english_stem | {fat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | cat   | english_stem | english_stem | {cat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | sat   | english_stem | english_stem | {sat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | on    | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | mat   | english_stem | english_stem | {mat}
+ blank     | Space symbols   |       |              |              | 
+ blank     | Space symbols   | -     |              |              | 
+ asciiword | Word, all ASCII | it    | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | ate   | english_stem | english_stem | {ate}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | a     | english_stem | english_stem | {}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | fat   | english_stem | english_stem | {fat}
+ blank     | Space symbols   |       |              |              | 
+ asciiword | Word, all ASCII | rats  | english_stem | english_stem | {rat}
 </screen>
   </para>
 
@@ -3297,13 +3347,13 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |        dictionaries         |    command     |   lexemes   
+-----------+-----------------+-------------+-----------------------------+----------------+-------------
+ asciiword | Word, all ASCII | The         | english_ispell,english_stem | english_ispell | {}
+ blank     | Space symbols   |             |                             |                | 
+ asciiword | Word, all ASCII | Brightest   | english_ispell,english_stem | english_ispell | {bright}
+ blank     | Space symbols   |             |                             |                | 
+ asciiword | Word, all ASCII | supernovaes | english_ispell,english_stem | english_stem   | {supernova}
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 394aea8..a997ec3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -944,55 +944,13 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT dictionaries text,
+    OUT dictionary text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index adc7cd6..e74b68f 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,108 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
+static TSMapElement *
+CreateCaseForSingleDictionary(Oid dictOid)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+	TSMapElement *keepElement = palloc0(sizeof(TSMapElement));
+	TSMapElement *condition = palloc0(sizeof(TSMapElement));
+	TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+
+	keepElement->type = TSMAP_KEEP;
+	keepElement->parent = result;
+	caseObject->command = keepElement;
+	caseObject->match = true;
+
+	condition->type = TSMAP_DICTIONARY;
+	condition->parent = result;
+	condition->value.objectDictionary = dictOid;
+	caseObject->condition = condition;
+
+	result->value.objectCase = caseObject;
+	result->type = TSMAP_CASE;
+
+	return result;
+}
+
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY_LIST)
+	{
+		int			i = 0;
+		ListCell   *c;
+		TSMapElement *root = NULL;
+		TSMapElement *currentNode = NULL;
+
+		foreach(c, (List *) elem->data)
+		{
+			TSMapElement *prevNode = currentNode;
+			List	   *names = (List *) lfirst(c);
+			Oid			oid = get_ts_dict_oid(names, false);
+
+			currentNode = CreateCaseForSingleDictionary(oid);
+
+			if (root == NULL)
+				root = currentNode;
+			else
+			{
+				prevNode->value.objectCase->elsebranch = currentNode;
+				currentNode->parent = prevNode;
+			}
+
+			prevNode = currentNode;
+
+			i++;
+		}
+		result = root;
+	}
+	return result;
+}
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1287,8 +1402,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1327,15 +1443,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1357,6 +1476,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1380,25 +1503,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1408,24 +1527,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index b1515dd..f6a776a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4386,6 +4386,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5393,6 +5429,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 2e869a9..05a056b 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2187,6 +2187,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3532,6 +3562,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ebfc94f..3ab0b75 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -308,7 +310,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <ival>	vacuum_option_list vacuum_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -396,8 +398,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				relation_expr_list dostmt_opt_list
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
-				publication_name_list
 				vacuum_relation_list opt_vacuum_relation_list
+				publication_name_list
 
 %type <list>	group_by_list
 %type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
@@ -582,6 +584,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_map_set_expr dictionary_map_case
+					dictionary_map_action dictionary_map
+					opt_dictionary_map_case_else dictionary_config
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -643,13 +651,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10318,24 +10327,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10387,6 +10398,111 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| /* EMPTY */ { $$ = false; }
+		;
+
+dictionary_config:
+			dictionary_map { $$ = $1; }
+			| any_name_list ',' any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY_LIST;
+				n->data = lappend($1, $3);
+				$$ = n;
+			}
+		;
+
+dictionary_map:
+			dictionary_map_case { $$ = $1; }
+			| dictionary_map_set_expr { $$ = $1; }
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_map { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_map { $$ = $2; }
+			| /* EMPTY */ { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_map WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
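+
+/*
+ * For illustration (with hypothetical dictionary names), this rule accepts
+ * constructions such as:
+ *
+ *     CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+ *
+ * i.e. keep the hunspell output when it recognizes the token, and fall
+ * back to the stemmer otherwise.
+ */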
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_map_set_expr:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_case dictionary_map_set_expr_operator dictionary_map_case
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+			| dictionary_map_command_expr_paren dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
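+
+/*
+ * For illustration (hypothetical dictionary names): parentheses group
+ * subexpressions, so an expression such as
+ *
+ *     (english_hunspell UNION english_stem) EXCEPT simple
+ *
+ * first unions the outputs of the two dictionaries and then removes any
+ * lexemes that the simple dictionary also produces.
+ */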
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_set_expr ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -15042,6 +15158,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15346,6 +15463,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 34fe4c5..24e47f2 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..7971f46
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1044 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of a text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, based on the assumption that a 1024-frame
+ * stack is enough for parsing any configuration
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * datastructures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during Jsonb parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element currently under construction */
+} TSMapJsonbParseData;
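+
+/*
+ * A sketch of the Jsonb layout handled below (the dictionary OIDs are
+ * hypothetical).  An expression is an object with "operator", "left" and
+ * "right" keys, where "operator" holds a numeric TSMAP_OP_* value; a CASE
+ * is an object with "condition", "command", "match" and an optional
+ * "elsebranch"; a dictionary is encoded as its numeric OID and the KEEP
+ * command as the string "keep":
+ *
+ *     {"condition": 16435,
+ *      "command": {"operator": 1, "left": 16435, "right": 16436},
+ *      "match": 1}
+ */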
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement * JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+static void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	if (!HeapTupleIsValid(maptup))
+		elog(ERROR, "cache lookup failed for text search dictionary %u",
+			 dictId);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	appendStringInfoString(result, NameStr(dict->dictname));
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+	if (expression->left)
+		TSMapPrintElement(expression->left, result);
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	if (expression->right)
+		TSMapPrintElement(expression->right, result);
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Dispatches on the element type to the dedicated print functions above.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as a text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();	/* already initialized; a second
+									 * initStringInfo would leak its buffer */
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
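+
+/*
+ * Usage sketch, assuming the function is exposed at the SQL level under
+ * this name with (cfg oid, tokentype int4) arguments; the OID and token
+ * type below are hypothetical:
+ *
+ *     SELECT dictionary_mapping_to_text(16432, 1);
+ */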
+
+/* ----------------
+ * Functions used to convert TSMap structure into Jsonb representation
+ * ----------------
+ */
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * The buffer is large enough for any 32-bit integer: at most 11
+	 * characters including the sign, plus the terminating NUL.
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common entry point for all types
+ * of TSMapElement
+ */
+static JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into Jsonb
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from Jsonb representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * NUL-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 || strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * NUL-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 || strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * NUL-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * NUL-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object. At the first key inside a
+			 * Jsonb object, the element type is selected based on that key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside the object under construction
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused Jsonb tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a Jsonb into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
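+
+/*
+ * TSMapToJsonb and JsonbToTSMap are intended to be inverses: a
+ * configuration serialized to Jsonb and parsed back should compare equal
+ * under TSMapElementEquals.  (A statement of the intended invariant, not
+ * an assertion enforced here.)
+ */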
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of data array. Uninitialized elements
+								 * in data are filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit()
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID into the list. If it is already stored there, it won't be
+ * added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/*
+	 * Not found: insert into the first free slot (the original code skipped
+	 * one slot past the sentinel, making added OIDs invisible to later
+	 * scans).  Grow the array whenever the insertion would leave no room
+	 * for the trailing InvalidOid sentinel that terminates the scan above.
+	 */
+	if (i + 1 >= list->size)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Used for internal recursive calls.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+		default:
+			/* TSMAP_KEEP and other leaf elements reference no dictionary */
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+		default:
+			/* TSMAP_KEEP and other leaf elements reference no dictionary */
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move a FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+			default:
+				/* dictionary and KEEP elements own no extra storage */
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index ad5dddf..748580d 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,19 +16,30 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Whether the token is accepted by some
+								 * dictionary */
+	bool	   *rejected;		/* Whether the token is rejected by all
+								 * dictionaries */
+	bool	   *notFinished;	/* Whether some dictionary has not finished
+								 * processing and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
 typedef struct ListParsedLex
@@ -37,318 +48,1365 @@ typedef struct ListParsedLex
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by a thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should the current token be appended
+									 * to accepted or intermediate tokens */
+	bool		processed;		/* Whether the dictionary took control during
+								 * processing of the current token */
+	TSLexeme   *tmpResult;		/* Last result returned by a thesaurus-like
+								 * dictionary while it is still waiting for
+								 * more lexemes */
+} DictState;
+
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+typedef struct LexemesBufferEntry
+{
+	Oid			dictId;
+	TSMapElement *key;
+	ParsedLex  *token;
+	TSLexeme   *data;
+} LexemesBufferEntry;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Lexemes accepted into the final result */
+} ResultStorage;
+
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work on */
+	ListParsedLex waste;		/* List of tokens that are already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * running the lexize process several times
+								 * with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected later */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid an infinite loop
+								 * in configurations with a phrase dictionary */
 } LexizeData;
 
+typedef struct TSDebugContext
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Token type of the last token */
+	TSLexeme   *savedLexemes;	/* Lexemes of the last token, stored for
+								 * ts_debug output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex entries */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
+
+/*
+ * Add a ParsedLex to the end of the list
+ */
+static void
+LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
+{
+	if (list->tail)
+	{
+		list->tail->next = newpl;
+		list->tail = newpl;
+	}
+	else
+		list->head = list->tail = newpl;
+	newpl->next = NULL;
+}
+
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
 static void
 LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
 {
 	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
 	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Hand the list of already-lexized tokens back to the caller
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its oid
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its oid
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified oid
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of lexemes in a TSLexeme array, not counting the terminator
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	pfree(lexeme);
+	return res;
+}
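+
+/*
+ * For illustration: an input of (a,nv=1) (a,nv=1) (b,nv=1) yields
+ * (a,nv=1) (b,nv=1), since exact repetitions within an nvariant group are
+ * dropped; whole nvariant groups that duplicate another group's lexeme
+ * set are meant to be collapsed the same way.
+ */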
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexeme set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the right lexemes are added after the last left
+ * lexeme with the TSL_ADDPOS flag set
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
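+
+/*
+ * For illustration: the union of {foo, bar} and {baz} with append = true
+ * yields {foo, bar, baz} with TSL_ADDPOS set on baz, so it occupies the
+ * next position instead of sharing one; the nvariant numbers of the
+ * right-hand lexemes are shifted past the left-hand maximum.
+ */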
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only which is stored in left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
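+
+/*
+ * For illustration: with left = {run, jog} and right = {jog, sprint},
+ * TSLexemeExcept(left, right) returns {run} while TSLexemeIntersect
+ * returns {jog}; comparison is by lexeme string only.
+ */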
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary waits for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as
+ * not null.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL)
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else
+	{
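+		/*
+		 * Feed each lexeme produced by the left subexpression to the right
+		 * subexpression as a separate input token and union the results.
+		 */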
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+	}
+
+	return result;
 }
 
-static void
-LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
+/*
+ * Execute a TSMapElement
+ * Common point of all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
 {
-	if (list->tail)
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	else if (config->type == TSMAP_DICTIONARY)
 	{
-		list->tail->next = newpl;
-		list->tail = newpl;
+		token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
 	}
-	else
-		list->head = list->tail = newpl;
-	newpl->next = NULL;
-}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
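+		/*
+		 * WHEN MATCH fires if the condition produced a result,
+		 * WHEN NO MATCH if it did not.
+		 */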
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			token->relatedRule = config;
 
-	if (list->head)
-		list->head = list->head->next;
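+			/* KEEP returns the output of the condition expression itself */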
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+		if (expression->operator != TSMAP_OP_MAP)
+		{
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
+		if (resLeft && expression->left->type != TSMAP_DICTIONARY)
+			pfree(resLeft);
+		if (resRight && expression->right->type != TSMAP_DICTIONARY)
+			pfree(resRight);
+	}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helper functions
+ *-------------------
+ */
+
+/*
+ * Process an EOF-like token.
+ * Return all temporary results, if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
+
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
 
-	ld->posDict = 0;
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get the last accepted results of the phrase dictionaries
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states that weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = -1;		/* restart the scan: the list was compacted */
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check whether any dictionary didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Do lexize processing for the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
+		{
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
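+			/*
+			 * Find the dictionary state that was not reprocessed: its
+			 * buffered tokens must be replayed. If the previous iteration
+			 * produced no result, skip that dictionary on the retry.
+			 */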
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
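+			/*
+			 * Replay the buffered phrase tokens: put them back at the front
+			 * of the towork queue and reinstate the already accepted tokens
+			 * as the waste list.
+			 */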
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay returning the result to avoid emitting wrong
+		 * lexemes if the phrase is later rejected.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
-
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus pharse processing should be
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in LexemesBuffer and removed
+	 * at the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1415,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1433,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
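+	/* Force the first pass through the loop to read a token from the parser */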
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1493,209 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: Oid cfgId (regconfig), text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 6);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[3] = str->data;
+			str = makeStringInfo();
+
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[4] = str->data;
+				str = makeStringInfo();
+			}
+		}
+
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[5] = str->data;
+		else
+			values[5] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1791,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1810,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
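+	/* Force the first pass through the loop to read a token from the parser */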
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1850,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +1906,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
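+				/* Emit startsel only at the start of a run of selected words */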
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
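+				/* Emit stopsel only at the end of a run of selected words */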
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index 56d4cf0..068a684 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 888edbb..0628b9c 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 29cf93a..9adfddc 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -415,11 +415,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -450,8 +449,10 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+				{
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
+				}
 				pfree(entry->map);
 			}
 		}
@@ -465,13 +466,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -483,6 +482,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -492,51 +492,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
+			maxtokentype = toktype;
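+			/*
+			 * Decode the JSONB representation into a TSMapElement tree and
+			 * keep a copy in CacheMemoryContext for the lifetime of the
+			 * cache entry.
+			 */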
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e6701aa..7e8dd00 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14208,10 +14208,11 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 					  "SELECT\n"
 					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+					  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
 					  "FROM pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
+					  "GROUP BY m.mapcfg, m.maptokentype\n"
+					  "ORDER BY m.mapcfg, m.maptokentype",
 					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -14225,20 +14226,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 		char	   *tokenname = PQgetvalue(res, i, i_tokenname);
 		char	   *dictname = PQgetvalue(res, i, i_dictname);
 
-		if (i == 0 ||
-			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
-		{
-			/* starting a new token type, so start a new command */
-			if (i > 0)
-				appendPQExpBufferStr(q, ";\n");
-			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
-							  fmtId(cfginfo->dobj.name));
-			/* tokenname needs quoting, dictname does NOT */
-			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
-							  fmtId(tokenname), dictname);
-		}
-		else
-			appendPQExpBuffer(q, ", %s", dictname);
+		/* starting a new token type, so start a new command */
+		if (i > 0)
+			appendPQExpBufferStr(q, ";\n");
+		appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
+						  fmtId(cfginfo->dobj.name));
+		/* tokenname needs quoting, dictname does NOT */
+		appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
+						  fmtId(tokenname), dictname);
 	}
 
 	if (ntups > 0)
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 3fc69c4..279fc2d 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4605,13 +4605,7 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 					  "  ( SELECT t.alias FROM\n"
 					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
+					  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
 					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
 					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index b13cf62..47f7f66 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */
 
 /*							yyyymmddN */
-#define CATALOG_VERSION_NO	201711301
+#define CATALOG_VERSION_NO	201712191
 
 #endif
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index ef84936..db487cf 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index c969375..1b32bd7 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4925,6 +4925,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,25,25,1009}" "{i,i,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index 3df0519..f6790d2 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,98 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef in order to use the same type name in the generated
+ * DB initialization script and in C source code.
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
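+/*
+ * In-memory representation of a dictionary map: a tree of expression,
+ * CASE and dictionary nodes decoded from the mapdicts JSONB column.
+ */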
+typedef struct TSMapElement
+{
+	int			type;
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	}			value;
+	struct TSMapElement *parent;
+} TSMapElement;
+
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index c5b5115..63dd5dc 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -381,6 +381,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 2eaa6b2..f4593fb 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3392,6 +3392,39 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY,
+	DICT_MAP_DICTIONARY_LIST
+} DictMapElemType;
+
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* actual type is determined by kind */
+} DictMapElem;
+
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3404,6 +3437,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index a932400..b409f0c 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -219,6 +219,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -241,6 +242,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index abff0fd..fe1e7bd 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..3c3323f
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of a text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move rule list into specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 94ba7fc..7230968 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44f..40029f3 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0744ef8..f7d966f 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,3 +679,55 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..5b6fe73 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,66 +567,65 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            | dictionaries |   command    | lexemes 
+-----------+-----------------+----------------------------+--------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  |              |              | 
+ asciiword | Word, all ASCII | abc                        | english_stem | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      |              |              | 
+ asciiword | Word, all ASCII | def                        | english_stem | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     |              |              | 
+ asciiword | Word, all ASCII | ghi                        | english_stem | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     |              |              | 
+ asciiword | Word, all ASCII | jkl                        | english_stem | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> |              |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                |              |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | simple       | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | simple       | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | simple       | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                |              |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------+------------------------------
+ protocol | Protocol head | http://                    |              |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | simple       | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | simple       | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | simple       | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     |              |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | command |        lexemes         
+----------+---------------+----------------------+--------------+---------+------------------------
+ protocol | Protocol head | http://              |              |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | simple       | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | simple       | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | simple       | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | command |         lexemes          
+----------+-------------+------------------------+--------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | simple       | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | simple       | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | simple       | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
-  dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims,
-  lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
+  dictionaries, lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
 from ts_debug('english', 'a title');
- token |   alias   |  dictionaries  | dnull | ddims | lexemes | lnull | ldims 
--------+-----------+----------------+-------+-------+---------+-------+-------
- a     | asciiword | {english_stem} | f     | [1:1] | {}      | f     | 
-       | blank     | {}             | f     |       |         | t     | 
- title | asciiword | {english_stem} | f     | [1:1] | {titl}  | f     | [1:1]
+ token |   alias   | dictionaries | lexemes | lnull | ldims 
+-------+-----------+--------------+---------+-------+-------
+ a     | asciiword | english_stem | {}      | f     | 
+       | blank     |              |         | t     | 
+ title | asciiword | english_stem | {titl}  | f     | [1:1]
 (3 rows)
 
 -- to_tsquery
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990..320e220 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index a5a569e..3f7df28 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -188,3 +239,25 @@ ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR
 SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..8ef3d71 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
@@ -146,8 +146,7 @@ SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
 SELECT token, alias,
-  dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims,
-  lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
+  dictionaries, lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims
 from ts_debug('english', 'a title');
 
 -- to_tsquery
0001-flexible-fts-configuration-v3-readme.md (text/markdown)
#11Arthur Zakirov
a.zakirov@postgrespro.ru
In reply to: Aleksandr Parfenov (#10)
Re: [HACKERS] Flexible configuration for full-text search

Hello,

On Tue, Dec 19, 2017 at 05:31:09PM +0300, Aleksandr Parfenov wrote:

The new version of the patch is attached, as well as a little
README file with a description of the changes in each file. Any
feedback is welcome.

I looked at the patch a little. The patch applies and the tests pass.

I noticed that there are typos in the documentation. I also think it is necessary to keep the information about the previous syntax, since that syntax will still be supported; for example, the information about TSL_FILTER was removed. It would also be good to add more examples of the new syntax.

The result of the ts_debug() function was changed. Is it possible to keep the old ts_debug() result? Specifically, the 'dictionaries' column is now text, not an array, as I understand it. It would be good to keep the old result for the sake of backward compatibility.

--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

#12Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Arthur Zakirov (#11)
2 attachment(s)
Re: [HACKERS] Flexible configuration for full-text search

Hi Arthur,

Thank you for the review.

On Thu, 21 Dec 2017 17:46:42 +0300
Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:

I noticed that there are typos in the documentation. I also think it
is necessary to keep the information about the previous syntax, since
that syntax will still be supported; for example, the information about
TSL_FILTER was removed. It would also be good to add more examples of
the new syntax.

In the current version of the patch, configurations written in the old
syntax are rewritten into equivalent configurations in the new syntax.
Since the new syntax doesn't support TSL_FILTER, it was removed from
the documentation. It is possible to store configurations written in
the old syntax in a special way and simulate the TSL_FILTER behavior
for them, but that would mean maintaining two different FTS behaviors
depending on which syntax version was used to define the configuration.
Do you think we should keep both behaviors?
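
For reference, here is a sketch of the rewrite for a typical old-style
mapping (using the english_ispell/english_stem pair from the updated
documentation; the rewritten form is what \dF+ and the new ts_debug
output display):

ALTER TEXT SEARCH CONFIGURATION public.english
    ALTER MAPPING FOR asciiword WITH english_ispell, english_stem;

-- is stored as the equivalent new-syntax expression:
-- CASE english_ispell WHEN MATCH THEN KEEP
-- ELSE CASE english_stem WHEN MATCH THEN KEEP
--      END
-- END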

The result of the ts_debug() function was changed. Is it possible to
keep the old ts_debug() result? Specifically, the 'dictionaries' column
is now text, not an array, as I understand it. It would be good to keep
the old result for the sake of backward compatibility.

The types of the 'dictionaries' and 'dictionary' columns were changed
to text because, after the patch, the configuration may be not a plain
array of dictionaries but a complex expression tree. The 'dictionaries'
column now contains the textual representation of the configuration,
the same as in the \dF+ description of the configuration.

I decided to rename the newly added column to 'configuration' and keep
the 'dictionaries' column as an array of all dictionaries used in the
configuration (regardless of how they are used). I also fixed a bug in
the 'command' output of ts_debug in some cases.
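
To illustrate, the updated ts_debug example from the documentation
patch now shows both new columns:

SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
-----------+-----------------+-------+----------------+---------------+--------------+---------
 asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}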

Additionally, I added some examples to the documentation covering
multilingual search and the combination of exact and linguistic-aware
search, and fixed the typos.
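
For instance, one of the added examples (quoted from the patch) mixes
exact search with stemming by taking the union of the simple and
english_stem outputs:

ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;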

Attachments:

0001-flexible-fts-configuration-v4.patch (text/x-patch)
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94b27..a1f483e10b 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -21,8 +21,12 @@ PostgreSQL documentation
 
  <refsynopsisdiv>
 <synopsis>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
@@ -88,6 +92,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
     </listitem>
    </varlistentry>
 
+   <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. A dictionary expression
+      is a condition/command/else triple that defines the way to process
+      the text. The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -154,6 +169,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 
  </refsect1>
 
+ <refsect1>
+  <title>Dictionaries Map Config</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats. The simplest format is the name of a dictionary to
+    use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, the user should connect dictionaries with operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in set operations.
+    The special operator <literal>MAP</literal> takes the output of the left subexpression
+    and uses it as the input to the right subexpression.
+   </para>
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    replaceable parts. The first one is a configuration which is used to construct the lexeme set
+    for the matching condition. If the condition is triggered, the command is executed.
+    Use the command <literal>KEEP</literal> to avoid repeating the same
+    configuration in the condition and command parts. However, the command may differ from
+    the condition. The <literal>ELSE</literal> branch is executed otherwise.
+   </para>
+  </refsect2>
+ </refsect1>
+
  <refsect1>
   <title>Examples</title>
 
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, the stemmer dictionaries
+   are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine searching for both exact and processed forms, the vector
+    should contain the lexemes produced by <literal>simple</literal> for the exact form
+    of the word as well as the lexemes produced by a linguistic-aware dictionary
+    (e.g. <literal>english_stem</literal>) for the processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 4dc52ec983..049c3fcff6 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type; conditions and commands are expressions on dictionaries
+    with a matching clause in the condition (<xref linkend="textsearch-dictionaries"/>).
+    The first command whose condition evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token. For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2227,14 +2228,6 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
       (notice that one token can produce more than one lexeme)
      </para>
     </listitem>
-    <listitem>
-     <para>
-      a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
-      the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
-     </para>
-    </listitem>
     <listitem>
      <para>
       an empty array if the dictionary knows the token, but it is a stop word
@@ -2264,38 +2257,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
+  </para>
+
+  <para>
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to generate
+   the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries used as operands and
+   the basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP</literal>.
+   The special operator <literal>MAP</literal> uses the output of the left subexpression as
+   input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the additional
+    keyword <literal>KEEP</literal>, which uses the result of the condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted to process a token, and the first
+   non-<literal>NULL</literal> output is accepted as the processing result.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   The general rule for configuring token processing
+   is to place first the case with the narrowest, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, using the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both the English and German languages, using the
+   operator-separated variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration provides the ability to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be used to mix
+   an exact-form search for one word with a linguistic search for others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example the <literal>simple</literal> dictionary is used to prevent words in the query from being normalized.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     The lack of information about the origin of each lexeme in a <literal>tsvector</literal> may
+     lead to false-positive matches when a stemmed form is used as an exact form in a query.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the <literal>MAP</literal>
+   operator. If the filtering dictionary returns <literal>NULL</literal>, it passes the initial token
+   to the right subexpression.
   </para>
 
   <sect2 id="textsearch-stopwords">
@@ -2462,9 +2543,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2476,9 +2557,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3103,6 +3187,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
     Now we can set up the mappings for words in configuration
     <literal>pg</literal>:
 
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
@@ -3182,7 +3281,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3226,14 +3326,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3246,32 +3352,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3297,13 +3403,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 394aea8e0f..4806e0b9fc 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -944,55 +944,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index adc7cd67a7..e74b68f1e1 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,108 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
+static TSMapElement *
+CreateCaseForSingleDictionary(Oid dictOid)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+	TSMapElement *keepElement = palloc0(sizeof(TSMapElement));
+	TSMapElement *condition = palloc0(sizeof(TSMapElement));
+	TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+
+	keepElement->type = TSMAP_KEEP;
+	keepElement->parent = result;
+	caseObject->command = keepElement;
+	caseObject->match = true;
+
+	condition->type = TSMAP_DICTIONARY;
+	condition->parent = result;
+	condition->value.objectDictionary = dictOid;
+	caseObject->condition = condition;
+
+	result->value.objectCase = caseObject;
+	result->type = TSMAP_CASE;
+
+	return result;
+}
+
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY_LIST)
+	{
+		int			i = 0;
+		ListCell   *c;
+		TSMapElement *root = NULL;
+		TSMapElement *currentNode = NULL;
+
+		foreach(c, (List *) elem->data)
+		{
+			TSMapElement *prevNode = currentNode;
+			List	   *names = (List *) lfirst(c);
+			Oid			oid = get_ts_dict_oid(names, false);
+
+			currentNode = CreateCaseForSingleDictionary(oid);
+
+			if (root == NULL)
+				root = currentNode;
+			else
+			{
+				prevNode->value.objectCase->elsebranch = currentNode;
+				currentNode->parent = prevNode;
+			}
+
+			prevNode = currentNode;
+
+			i++;
+		}
+		result = root;
+	}
+	return result;
+}
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1287,8 +1402,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1327,15 +1443,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1357,6 +1476,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1380,25 +1503,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1408,24 +1527,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 84d717102d..3e5d19c5e2 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4387,6 +4387,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5394,6 +5430,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 2e869a9d5d..05a056b61d 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2186,6 +2186,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 	return true;
 }
 
+static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
 static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
@@ -3532,6 +3562,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index ebfc94f896..3ab0b75ece 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -308,7 +310,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <ival>	vacuum_option_list vacuum_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -396,8 +398,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				relation_expr_list dostmt_opt_list
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
-				publication_name_list
 				vacuum_relation_list opt_vacuum_relation_list
+				publication_name_list
 
 %type <list>	group_by_list
 %type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
@@ -582,6 +584,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_map_set_expr dictionary_map_case
+					dictionary_map_action dictionary_map
+					opt_dictionary_map_case_else dictionary_config
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -643,13 +651,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10318,24 +10327,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10387,6 +10398,111 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| /*EMPTY*/ { $$ = false; }
+		;
+
+dictionary_config:
+			dictionary_map { $$ = $1; }
+			| any_name_list ',' any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY_LIST;
+				n->data = lappend($1, $3);
+				$$ = n;
+			}
+		;
+
+dictionary_map:
+			dictionary_map_case { $$ = $1; }
+			| dictionary_map_set_expr { $$ = $1; }
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_map { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_map { $$ = $2; }
+			| /*EMPTY*/ { $$ = NULL; }
+		;
+
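+/*
+ * For illustration, one mapping accepted by this rule (dictionary names as
+ * in the examples given in this thread):
+ *     CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+ */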
+dictionary_map_case:
+			CASE dictionary_map WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_map_set_expr:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_case dictionary_map_set_expr_operator dictionary_map_case
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+			| dictionary_map_command_expr_paren dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_set_expr ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -15042,6 +15158,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15346,6 +15463,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 34fe4c5b3c..24e47f20f4 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index ad5dddff4b..2b3caf95dd 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,19 +16,30 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Token is accepted by some dictionary */
+	bool	   *rejected;		/* Token is rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary has not finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
 typedef struct ListParsedLex
@@ -37,37 +48,98 @@ typedef struct ListParsedLex
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Did the dictionary take control during
+								 * current token processing */
+	TSLexeme   *tmpResult;		/* Last result returned by a thesaurus-like
+								 * dictionary, if the dictionary is still
+								 * waiting for more lexemes */
+} DictState;
+
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+typedef struct LexemesBufferEntry
+{
+	Oid			dictId;
+	TSMapElement *key;
+	ParsedLex  *token;
+	TSLexeme   *data;
+} LexemesBufferEntry;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes, which are not yet
+								 * accepted */
+	TSLexeme   *accepted;
+} ResultStorage;
+
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that were already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * multiple executions of the token lexize
+								 * process with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in the future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid an infinite loop
+								 * in configurations with a phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Type of the last token read */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponded ParsedLex */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +153,1277 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
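+	/* Shallow copy: the lemm string is shared with the original entry */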
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Set token corresponding to the current lexeme
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its oid
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its oid
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
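+	/* If found, shift the remaining entries left and shrink the array */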
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified oid
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
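+	/*
+	 * The same TSLexeme array may be stored under several keys, so mark all
+	 * entries sharing a pointer and free each allocation exactly once.
+	 */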
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of entries in a TSLexeme array, not counting the
+ * terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
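+	/*
+	 * First pass: clear shouldCopy for exact duplicates and for repeated
+	 * whole nvariant groups; the second pass below copies the survivors.
+	 */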
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
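+		/*
+		 * Interleave the lists: each inner loop copies one positional group,
+		 * i.e. lexemes up to the next TSL_ADDPOS boundary.
+		 */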
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, right lexemes are added after the last left lexeme,
+ * with the TSL_ADDPOS flag set on the first of them
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
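+		/*
+		 * Renumber right-side nvariants past left's maximum so variant
+		 * groups from the two lists stay distinct.
+		 */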
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only those stored in the left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
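+			/*
+			 * A non-NULL result is only tentative here: stash it as
+			 * tmpResult and return nothing yet, since the dictionary may
+			 * still reject the phrase after seeing more tokens.
+			 */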
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary waits for more tokens or not
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL or not.
+ * If the dictionary waits for more lexemes, the result is interpreted as not null.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL)
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else
+	{
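+		/*
+		 * Feed each lexeme produced by the left side to the right side as a
+		 * separate input token and union the results.
+		 */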
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement
+ * Common point of all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (ld->debugContext)
+		{
+			relatedRuleTmp = palloc0(sizeof(TSMapElement));
+			relatedRuleTmp->parent = NULL;
+			relatedRuleTmp->type = TSMAP_EXPRESSION;
+			relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+			relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		}
 
-	if (list->head)
-		list->head = list->head->next;
+		if (expression->operator != TSMAP_OP_MAP)
+		{
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helpers functions
+ *-------------------
+ */
+
+/*
+ * Processing of EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get last accepted result of the phrase-dictionary
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states which weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Do lexize processing for the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
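+			/*
+			 * Put the tokens consumed by the unfinished dictionary back at
+			 * the head of towork so they are reprocessed from scratch.
+			 */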
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
	 * Copy result since it may be stored in LexemesBuffer and removed at the
+	 * next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1432,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1450,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
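+	/*
+	 * A positive type primes the loop so the parser is polled at least once;
+	 * the loop also continues while queued tokens remain in towork.
+	 */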
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1510,245 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: regconfiguration(Oid) cfgId, text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
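+		/*
+		 * Output columns: token alias, description, token text, candidate
+		 * dictionaries, configuration, applied command and lexemes.
+		 */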
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		initStringInfo(str);
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid *currentDictionary = NULL;
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				initStringInfo(str);
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[6] = str->data;
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1844,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1863,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
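+	/* As in parsetext(): prime the loop and drain any tokens left in towork */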
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1903,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +1959,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index 56d4cf03e5..068a684cae 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 888edbb325..0628b9c2a9 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 29cf93a4de..9adfddc213 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -415,11 +415,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -450,8 +449,10 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+				{
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
+				}
 				pfree(entry->map);
 			}
 		}
@@ -465,13 +466,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -483,6 +482,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -492,51 +492,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e6701aaa78..7e8dd00158 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14208,10 +14208,11 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 					  "SELECT\n"
 					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+					  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
 					  "FROM pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
+					  "GROUP BY m.mapcfg, m.maptokentype\n"
+					  "ORDER BY m.mapcfg, m.maptokentype",
 					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -14225,20 +14226,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 		char	   *tokenname = PQgetvalue(res, i, i_tokenname);
 		char	   *dictname = PQgetvalue(res, i, i_dictname);
 
-		if (i == 0 ||
-			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
-		{
-			/* starting a new token type, so start a new command */
-			if (i > 0)
-				appendPQExpBufferStr(q, ";\n");
-			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
-							  fmtId(cfginfo->dobj.name));
-			/* tokenname needs quoting, dictname does NOT */
-			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
-							  fmtId(tokenname), dictname);
-		}
-		else
-			appendPQExpBuffer(q, ", %s", dictname);
+		/* starting a new token type, so start a new command */
+		if (i > 0)
+			appendPQExpBufferStr(q, ";\n");
+		appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
+						  fmtId(cfginfo->dobj.name));
+		/* tokenname needs quoting, dictname does NOT */
+		appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
+						  fmtId(tokenname), dictname);
 	}
 
 	if (ntups > 0)
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 3fc69c46c0..279fc2d1f2 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4605,13 +4605,7 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 					  "  ( SELECT t.alias FROM\n"
 					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
+					  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
 					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
 					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index b13cf62bec..47f7f669ba 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */
 
 /*							yyyymmddN */
-#define CATALOG_VERSION_NO	201711301
+#define CATALOG_VERSION_NO	201712191
 
 #endif
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index ef8493674c..db487cfe57 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index c969375981..2640ab8b1c 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4925,6 +4925,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,3770,25,25,1009}" "{i,i,o,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,configuration,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index 3df05195be..f6790d2cd2 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,98 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef in order to use the same type name in the
+ * generated DB initialization script and in the C source code.
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+typedef struct TSMapElement
+{
+	int			type;
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	}			value;
+	struct TSMapElement *parent;
+} TSMapElement;
+
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index c5b5115f5b..63dd5dcb3a 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -381,6 +381,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 2eaa6b2774..f4593fbdf2 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3392,6 +3392,39 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY,
+	DICT_MAP_DICTIONARY_LIST
+} DictMapElemType;
+
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* Type should be detected by kind value */
+} DictMapElem;
+
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3404,6 +3437,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index a932400058..b409f0c02b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -219,6 +219,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -241,6 +242,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index abff0fdfcc..fe1e7bd204 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 94ba7fcb20..7230968bfa 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44fdf2..40029f396a 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0744ef803b..f7d966f48f 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,3 +679,55 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12f1d..c0e9fc5c8f 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990f6b..320e220d06 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index a5a569e1ad..3f7df283cb 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -188,3 +239,25 @@ ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR
 SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b3e9..6f8af63c1a 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
0001-flexible-fts-configuration-v4-readme.md (text/markdown)
#13Arthur Zakirov
a.zakirov@postgrespro.ru
In reply to: Aleksandr Parfenov (#12)
Re: [HACKERS] Flexible configuration for full-text search

On Mon, Dec 25, 2017 at 05:15:07PM +0300, Aleksandr Parfenov wrote:

In the current version of the patch, configurations written in the old
syntax are rewritten into the same configuration in the new syntax.
Since the new syntax doesn't support TSL_FILTER, it was removed from the
documentation. It is possible to store configurations written in the old
syntax in a special way and simulate the TSL_FILTER behavior for them.
But it will lead to maintaining two different behaviors of the FTS
depending on the version of the syntax used during configuration. Do you
think we should keep both behaviors?

As I understand it, users need to rewrite their configurations if they use the unaccent dictionary, for example.
That is not good, I think. Users who use only the old configuration and don't need the new one will be upset about that.

From my point of view, it is necessary to keep the old configuration syntax.

The types of the 'dictionaries' and 'dictionary' columns were changed to
text because after the patch the configuration may be not a plain array
of dictionaries but a complex expression tree. In the 'dictionaries'
column the result is a textual representation of the configuration, and
it is the same as in the \dF+ description of the configuration.

Oh, I understood.

I decided to rename the newly added column to 'configuration' and keep
the 'dictionaries' column with an array of all dictionaries used in the
configuration (no matter how). Also, I fixed a bug in the 'command' output
of ts_debug in some cases.

Maybe it would be better to keep the 'dictionary' column name? Is there a reason why it was renamed to 'command'?

Additionally, I added some examples to the documentation regarding
multilingual search and the combination of exact and linguistic-aware
search, and fixed typos.

Great!

--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

#14Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Arthur Zakirov (#13)
Re: [HACKERS] Flexible configuration for full-text search

On Tue, 26 Dec 2017 13:51:03 +0300
Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:

On Mon, Dec 25, 2017 at 05:15:07PM +0300, Aleksandr Parfenov wrote:

As I understand it, users need to rewrite their configurations if they
use the unaccent dictionary, for example. That is not good, I think.
Users who use only the old configuration and don't need the new one will
be upset about that.

From my point of view, it is necessary to keep the old configuration
syntax.

I see your point. I will rework the patch to keep backward
compatibility and restore the TSL_FILTER entry in the documentation.
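
To be concrete, the goal is that an existing comma-separated mapping
with a filtering dictionary, for example something like

ALTER TEXT SEARCH CONFIGURATION my_config
    ALTER MAPPING FOR asciiword WITH unaccent, english_stem;

keeps working unchanged, with unaccent still acting as a filtering
(TSL_FILTER) dictionary for the dictionaries after it (my_config is
just an illustrative name here).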

I decided to rename the newly added column to 'configuration' and keep
the 'dictionaries' column with an array of all dictionaries used in the
configuration (no matter how). Also, I fixed a bug in the 'command'
output of ts_debug in some cases.

Maybe it would be better to keep the 'dictionary' column name? Is
there a reason why it was renamed to 'command'?

I changed the name because the column may contain more than one
dictionary interconnected via operators (e.g. 'english_stem UNION
simple') and the word 'dictionary' doesn't fully describe the content of
the column anymore. Also, the type of the column was changed from
regdictionary to text in order to put operators into the output.
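
For example, with a mapping like the english_union configuration from
the regression tests (asciiword mapped to english_stem UNION simple),
the column contains the whole expression rather than a single
dictionary name, roughly:

SELECT alias, command, lexemes FROM ts_debug('english_union', 'books');
   alias   |          command          |   lexemes
-----------+---------------------------+--------------
 asciiword | english_stem UNION simple | {book,books}

This output is only a sketch; the lexemes column follows the
to_tsvector('english_union', 'books') result in the regression tests,
which keeps both 'book' and 'books'.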

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

#15Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksandr Parfenov (#1)
1 attachment(s)
Re: [HACKERS] Flexible configuration for full-text search

Greetings,

According to http://commitfest.cputube.org/ the patch no longer applies.

An updated version of the patch is attached.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v6.patch (text/x-patch)
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94..a1f483e 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. The dictionary expression
+      is a condition/command/else triple that defines the way the text
+      is processed. The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +170,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+   <title>Dictionary Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats. The simplest format is the name of a
+    dictionary to use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, the user should interconnect dictionaries with operators.
+    The operators <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on sets.
+    The special operator <literal>MAP</literal> takes the output of the left
+    subexpression and uses it as the input to the right subexpression.
+   </para>
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    replaceable parts. The first one is the configuration used to construct the
+    lexeme set for the matching condition. If the condition is triggered, the
+    command is executed. Use the command <literal>KEEP</literal> to avoid
+    repeating the same configuration in the condition and command parts;
+    the command may, however, differ from the condition. Otherwise, the
+    <literal>ELSE</literal> branch is executed.
+   </para>
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both the English and German languages.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, stemmer dictionaries
+   are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine searches for both exact and processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact
+    form of the word as well as lexemes produced by a linguistic-aware dictionary
+    (e.g. <literal>english_stem</literal>) for processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 4dc52ec..f719fa9 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type. Conditions and commands are expressions on dictionaries
+    (<xref linkend="textsearch-dictionaries"/>), with a matching clause in the
+    condition. The first command whose condition evaluates to true emits one or
+    more normalized <firstterm>lexemes</firstterm> to represent the token. For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2231,8 +2232,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in a comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2264,38 +2265,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to
+   generate the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries used as operands, the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP</literal>.
+   The special operator <literal>MAP</literal> uses the output of the left
+   subexpression as the input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the
+    additional keyword <literal>KEEP</literal>, which reuses the result of the
+    condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted to process a token, and
+   the first non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the most narrow, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer in the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both the English and German languages via
+   the operator-separated variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration makes it possible to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may
+   be used to mix exact-form search for some words with linguistic search for others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example, the <literal>simple</literal> dictionary is used to
+   prevent a word in the query from being normalized.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     Since a <literal>tsvector</literal> does not record which dictionary
+     produced each lexeme, a stemmed form that coincides with the exact form
+     used in a query may cause false-positive matches.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the
+   <literal>MAP</literal> operator.  If the filtering dictionary returns
+   <literal>NULL</literal>, it passes the initial token unchanged to the
+   right subexpression.
   </para>
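+
+  <para>
+   For example, assuming a filtering dictionary <literal>unaccent_dict</literal>
+   created from the <xref linkend="unaccent"/> module (the configuration and
+   dictionary names here are only illustrative), accents can be stripped before
+   stemming:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION fr
+    ALTER MAPPING FOR asciiword, word WITH unaccent_dict MAP french_stem;
+</programlisting>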
 
   <sect2 id="textsearch-stopwords">
@@ -2462,9 +2551,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2476,9 +2565,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3107,6 +3199,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3182,7 +3289,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3226,14 +3334,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3246,32 +3360,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3297,13 +3411,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 5652e9e..f9fdf4d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -944,55 +944,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index bf06ed9..9fedcf7 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,55 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1287,8 +1349,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1327,15 +1390,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1357,6 +1423,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1380,25 +1450,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			TSMapElementFree(config);
 		}
 
 		systable_endscan(scan);
@@ -1408,24 +1474,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index ddbbc79..c15da03 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4387,6 +4387,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5394,6 +5430,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 30ccc9c..11c8219 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2187,6 +2187,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3532,6 +3562,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index e42b7ca..b78cd13 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -308,7 +310,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <ival>	vacuum_option_list vacuum_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -396,8 +398,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				relation_expr_list dostmt_opt_list
 				transform_element_list transform_type_list
 				TriggerTransitions TriggerReferencing
-				publication_name_list
 				vacuum_relation_list opt_vacuum_relation_list
+				publication_name_list
 
 %type <list>	group_by_list
 %type <node>	group_by_item empty_grouping_set rollup_clause cube_clause
@@ -582,6 +584,13 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_map_set_expr dictionary_map_case
+					dictionary_map_action dictionary_map
+					opt_dictionary_map_case_else dictionary_config
+					dictionary_config_comma
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -643,13 +652,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10318,24 +10328,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10387,6 +10399,134 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| /* EMPTY */ { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_config:
+			dictionary_map { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map:
+			dictionary_map_case { $$ = $1; }
+			| dictionary_map_set_expr { $$ = $1; }
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_map { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_map { $$ = $2; }
+			| /* EMPTY */ { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_map WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_map_set_expr:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_case dictionary_map_set_expr_operator dictionary_map_case
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+			| dictionary_map_command_expr_paren dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_set_expr ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
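+
+/*
+ * Illustrative examples of mappings accepted by the rules above
+ * (dictionary names other than english_stem are hypothetical):
+ *   WITH english_stem
+ *   WITH synonym_dict, ispell_dict, stem_dict
+ *   WITH CASE ispell_dict WHEN MATCH THEN KEEP ELSE stem_dict END
+ *   WITH unaccent_dict MAP (ispell_dict UNION stem_dict)
+ */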
 
 /*****************************************************************************
  *
@@ -15037,6 +15177,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15341,6 +15482,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468a..e61ad4f 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..0e9abbe
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1051 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, based on the assumption that a stack of 1024
+ * frames is enough for parsing configurations
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during Jsonb parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is under construction now */
+} TSMapJsonbParseData;
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement * JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+	if (expression->left)
+		TSMapPrintElement(expression->left, result);
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	if (expression->right)
+		TSMapPrintElement(expression->right, result);
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Dispatches on the element type to the appropriate print function.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
+
+/* ----------------
+ * Functions used to convert TSMap structure into Jsonb representation
+ * ----------------
+ */
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * Buffer size is based on the maximum length of a signed int printed in
+	 * decimal: up to 12 characters including the sign and terminating NUL.
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common point for all types of TSMapElement
+ */
+JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into Jsonb
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
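+/*
+ * A sketch of the Jsonb layout produced here (for illustration): a
+ * dictionary is stored as its numeric OID, the KEEP command as the string
+ * "keep", an expression as {"operator": n, "left": ..., "right": ...},
+ * and a case as {"condition": ..., "command": ..., "elsebranch": ...,
+ * "match": 0 or 1}.  For example, CASE english_stem WHEN MATCH THEN KEEP END
+ * is stored roughly as {"condition": <english_stem OID>, "command": "keep",
+ * "match": 1}.
+ */
+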
+/* ----------------
+ * Functions used to get TSMap structure from Jsonb representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Convert it to a C
+	 * string so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 || strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Convert it to a C
+	 * string so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 || strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Convert it to a C
+	 * string so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Convert it to a C
+	 * string so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object.  At the first key inside a
+			 * Jsonb object, the element type is selected based on the key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside constructing object
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element, such as a dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused Jsonb tokens */
+			break;
+	}
+}
+
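+/*
+ * Illustrative sketch of the Jsonb shape this parser consumes.  The exact
+ * key names are defined by the serialization counterpart of this code; the
+ * nesting shown here is an assumption based on the parser states:
+ *
+ *	{"expression": {"operator": 1, "left": 16555, "right": 16556}}
+ *	{"case": {"condition": {...}, "command": {...}, "match": 1}}
+ *
+ * WJB_BEGIN_OBJECT pushes a new parser state, WJB_KEY selects what the
+ * following value means, and WJB_VALUE/WJB_END_OBJECT attach the result to
+ * the parent element and pop the state.
+ */
+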
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a Jsonb into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extensible list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of data array. Uninitialized elements
+								 * in data filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit()
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID to the list. If it is already stored in the list, it won't
+ * be added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it at the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
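+/*
+ * Usage sketch (illustrative only): duplicates are ignored, so
+ *
+ *	OidList    *list = OidListInit();
+ *
+ *	OidListAdd(list, dictOid);
+ *	OidListAdd(list, dictOid);
+ *
+ * leaves exactly one copy of dictOid in the list, and the data array stays
+ * terminated by InvalidOid.
+ */
+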
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Used for internal recursive calls.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move a FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
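+/*
+ * The functions below build a deep copy of the whole configuration tree in
+ * the target context.  This lets a tree parsed in a short-lived context
+ * (e.g. while deserializing the Jsonb representation) be kept in a
+ * long-lived one, such as the one backing the text search config cache.
+ */
+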
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Parents are not compared
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5..5c3977d 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,19 +16,30 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Accepted by at least one dictionary */
+	bool	   *rejected;		/* Rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary hasn't finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
 typedef struct ListParsedLex
@@ -37,37 +48,98 @@ typedef struct ListParsedLex
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Whether the dictionary took control during
+								 * processing of the current token */
+	TSLexeme   *tmpResult;		/* Last result returned by a thesaurus-like
+								 * dictionary, while the dictionary is still
+								 * waiting for more lexemes */
+} DictState;
+
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+typedef struct LexemesBufferEntry
+{
+	Oid			dictId;
+	TSMapElement *key;
+	ParsedLex  *token;
+	TSLexeme   *data;
+} LexemesBufferEntry;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Lexemes accepted for output */
+} ResultStorage;
+
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list of tokens to process */
+	ListParsedLex waste;		/* List of tokens that were already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * multiple executions of the lexize process
+								 * with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in the future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid an infinite loop
+								 * in configurations with a phrase dictionary */
+	bool		debugContext;	/* If true, the relatedRule attribute is
+								 * filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser cache entry for the current
+								 * ts_debug call */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Token type of the last token */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex tokens */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +153,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list and return a pointer to the detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+	ld->debugContext = false;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc0(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Move the head of the processing queue to the waste list
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Hand the tokens corresponding to the current lexemes to the caller, or
+ * free them if the caller is not interested
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get the state of a dictionary by its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove the state of a dictionary by its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memmove(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert the state of a dictionary with the specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
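+/*
+ * Note on lifetime: DictState entries persist across LexizeExec calls while
+ * a multi-token (thesaurus-like) dictionary is waiting for more input; they
+ * are dropped once the dictionary either accepts or rejects the phrase (see
+ * LexizeExecClearDictStates and DictStateListClear below).
+ */
+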
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by the specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by the specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memmove(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of lexemes in a TSLexeme array, not counting the
+ * terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
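+/*
+ * Illustrative intent: for the input {foo/1, bar/1, foo/2, bar/2} (written
+ * as lexeme/nvariant), the second nvariant group duplicates the first one
+ * as a whole and is meant to be dropped, giving {foo/1, bar/1}.
+ */
+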
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by a multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the first right lexeme gets the TSL_ADDPOS flag, so it
+ * is added at a new position after the last left lexeme
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
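+/*
+ * Illustrative example: the union of {foo/1, bar/2} and {baz/1} (written as
+ * lexeme/nvariant) is {foo/1, bar/2, baz/3}: right-side nvariants are
+ * shifted by the maximum left-side nvariant, so variant groups of the two
+ * operands never collide.
+ */
+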
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only those stored in the left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
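+/*
+ * Illustrative semantics of the two operators above: with left =
+ * {books, book} and right = {book}, TSLexemeExcept returns {books} and
+ * TSLexemeIntersect returns {book}.  Comparison is by lexeme string only;
+ * flags and nvariant of the left operand are preserved in the result.
+ */
+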
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to the accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
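+/*
+ * Note on the protocol above: a dictionary that sets subState.getnext asks
+ * for more input (thesaurus-like behavior).  Its partial result is stashed
+ * in DictState.tmpResult and is only released once the phrase is either
+ * completed or rejected; see LexizeExec for the rollback logic.
+ */
+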
+/*
+ * Check is dictionary waits for more tokens or not
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as
+ * not NULL.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			 (left_res == NULL || (left_res->flags & TSL_FILTER) == 0))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = palloc0(sizeof(TSMapElement));
+
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
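+/*
+ * Illustrative flow of TSLexemeMap above: if the left subtree produces
+ * {lexA, lexB}, each lexeme is wrapped into a temporary token and lexized
+ * by the right subtree; the per-lexeme results are then UNIONed into the
+ * final output.
+ */
+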
+/*
+ * Execute a TSMapElement.
+ * Common entry point for all possible types of TSMapElement.
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helpers functions
+ *-------------------
+ */
+
+/*
+ * Process an EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get the last accepted results of phrase-like dictionaries
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states which weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i--;				/* the array shifted left; recheck this index */
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Perform lexize processing on the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in the LexemesBuffer and
+	 * removed at the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1446,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1464,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1524,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by a temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from the input text and add it to the processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information for each token, such as its token
+ * type, dictionary map configuration, selected command, and lexemes.
+ * Arguments: regconfig (Oid) cfgId, text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
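+	/* The parser is exhausted; drain tokens still queued inside LexizeData. */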
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
+
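+		/*
+		 * Output columns: alias, description, token, dictionaries,
+		 * configuration, command, lexemes.
+		 */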
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid *currentDictionary = NULL;
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+		{
+			appendStringInfoChar(str, '}');
+			values[6] = str->data;
+		}
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1859,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1878,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;					/* prime the loop to fetch the first token */
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1918,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +1974,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
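+				/* emit startsel only at the start of a run of selected words */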
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
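+				/* emit stopsel only at the end of a run of selected words */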
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03ae..0dd846b 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 041cd53..aa1e8b6 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 3d5c194..ab6ca4b 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -415,11 +415,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -450,8 +449,10 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+				{
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
+				}
 				pfree(entry->map);
 			}
 		}
@@ -465,13 +466,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -483,6 +482,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -492,51 +492,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
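+			/* Each row stores the whole map for one token type as jsonb. */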
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 27628a3..102bf44 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14208,10 +14208,11 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 					  "SELECT\n"
 					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+					  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
 					  "FROM pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
+					  "GROUP BY m.mapcfg, m.maptokentype\n"
+					  "ORDER BY m.mapcfg, m.maptokentype",
 					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -14225,20 +14226,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 		char	   *tokenname = PQgetvalue(res, i, i_tokenname);
 		char	   *dictname = PQgetvalue(res, i, i_dictname);
 
-		if (i == 0 ||
-			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
-		{
-			/* starting a new token type, so start a new command */
-			if (i > 0)
-				appendPQExpBufferStr(q, ";\n");
-			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
-							  fmtId(cfginfo->dobj.name));
-			/* tokenname needs quoting, dictname does NOT */
-			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
-							  fmtId(tokenname), dictname);
-		}
-		else
-			appendPQExpBuffer(q, ", %s", dictname);
+		/* starting a new token type, so start a new command */
+		if (i > 0)
+			appendPQExpBufferStr(q, ";\n");
+		appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
+						  fmtId(cfginfo->dobj.name));
+		/* tokenname needs quoting, dictname does NOT */
+		appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
+						  fmtId(tokenname), dictname);
 	}
 
 	if (ntups > 0)
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f2e6294..d780f3a 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4605,13 +4605,7 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 					  "  ( SELECT t.alias FROM\n"
 					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
 					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
+					  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
 					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
 					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
 					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 0bb8754..1dd4938 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 298e0ae..0ca2fad 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4925,6 +4925,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,3770,25,25,1009}" "{i,i,o,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,configuration,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index a3d9e3f..9362882 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,99 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef so that the same type name can be used in the
+ * generated DB initialization script and in C source code.
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * TSMapElement is a node of the dictionary map tree.  The type field
+ * tells which member of the value union is valid.
+ */
+typedef struct TSMapElement
+{
+	int			type;			/* see TSMAP_* element types below */
+	union
+	{
+		struct TSMapExpression *objectExpression;	/* TSMAP_EXPRESSION */
+		struct TSMapCase *objectCase;	/* TSMAP_CASE */
+		Oid			objectDictionary;	/* TSMAP_DICTIONARY */
+		void	   *object;		/* generic access to the payload */
+	}			value;
+	struct TSMapElement *parent;	/* enclosing element, if any */
+} TSMapElement;
+
+/*
+ * A binary operator (TSMAP_OP_*) applied to two dictionary map subtrees.
+ */
+typedef struct TSMapExpression
+{
+	int			operator;		/* see TSMAP_OP_* operators below */
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * One CASE ... WHEN [ NO ] MATCH THEN ... ELSE ... END branch.
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;	/* configuration checked for a match */
+	TSMapElement *command;		/* executed when the condition is triggered */
+	TSMapElement *elsebranch;	/* executed otherwise; may be absent */
+	bool		match;			/* if false, WHEN NO MATCH is used */
+} TSMapCase;
+
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 2eb3d6d..c69ac7f 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -381,6 +381,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index b72178e..d46db7b 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3392,6 +3392,38 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* actual node type is determined by kind */
+} DictMapElem;
+
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3404,6 +3436,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 26af944..f56af7e 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -219,6 +219,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -241,6 +242,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d5..4633dd7 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..c95b3f3
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide an interface to convert a text search configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move a rule tree into the specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of a rule tree */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa..d970eec 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44f..40029f3 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0744ef8..f7d966f 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,3 +679,55 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..c0e9fc5 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990..320e220 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index a5a569e..3f7df28 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -188,3 +239,25 @@ ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR
 SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..6f8af63 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#16Aleksander Alekseev
a.alekseev@postgrespro.ru
In reply to: Aleksandr Parfenov (#15)
Re: Flexible configuration for full-text search

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

This patch seems to be in pretty good shape. There is room for improvement,
though.

1. There are no comments for some procedures, their arguments and return
values. Ditto regarding structures and their fields.

2. Please, fix the year in the copyright messages from 2017 to 2018.

3. Somehow I doubt that the current amount of tests covers most of the
functionality. Are you sure that if we run lcov, it will not show that most of
the new code is never executed during make installcheck-world?

4. I'm a bit concerned regarding the change of the catalog in the
src/include/catalog/indexing.h file. Are you sure it will not break if I
migrate from PostgreSQL 10 to PostgreSQL 11?

5. There are typos, e.g. "Last result retued...", "...thesaurus pharse
processing...".

I'm going to run a few more tests a bit later. I'll let you know if I find
anything.

The new status of this patch is: Waiting on Author

#17Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksander Alekseev (#16)
1 attachment(s)
Re: Flexible configuration for full-text search

Hi Aleksander,

Thank you for the review!

1. There are no comments for some procedures, their arguments and
return values. Ditto regarding structures and their fields.

2. Please, fix the year in the copyright messages from 2017 to 2018.

Both issues are fixed.

3. Somehow I doubt that the current amount of tests covers most of the
functionality. Are you sure that if we run lcov, it will not show
that most of the new code is never executed during make
installcheck-world?

I checked it and most of the new code is executed (I mostly checked
ts_parse.c and ts_configmap.c because those files contain most of the
potentially unchecked code). I have added some tests to check the output of
the configurations. Also, I have added a test of a TEXT SEARCH CONFIGURATION
to the unaccent contrib module to test the MAP operator.

4. I'm a bit concerned regarding the change of the catalog in the
src/include/catalog/indexing.h file. Are you sure it will not break
if I migrate from PostgreSQL 10 to PostgreSQL 11?

I have tested the upgrade process via pg_upgrade from PostgreSQL 10 and
PostgreSQL 9.6 and found a bug in the way pg_dump generates the schema
dump for old databases. I fixed it, and now pg_upgrade works fine.
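
For illustration (a sketch; the configuration name is just an example),
pg_dump now emits one ADD MAPPING statement per token type, with the
dictionary expression rendered by dictionary_mapping_to_text():

    ALTER TEXT SEARCH CONFIGURATION english
        ADD MAPPING FOR asciiword WITH english_stem;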

The new version of the patch is in an attachment.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v7.patchtext/x-patchDownload
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e..37b9337 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
  {����}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 3102139..6ce21cd 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -2,7 +2,6 @@ CREATE EXTENSION unaccent;
 
 -- must have a UTF8 database
 SELECT getdatabaseencoding();
-
 SET client_encoding TO 'KOI8';
 
 SELECT unaccent('foobar');
@@ -16,3 +15,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', '����');
 SELECT ts_lexize('unaccent', '����');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94..ecc3704 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. The expression is a
+      condition/command/else triple that defines the way the text is
+      processed. The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +170,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionaries Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats. The simplest format is the name of a
+    dictionary to use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary simultaneously, the
+    dictionaries should be interconnected by operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on sets.
+    The special operator <literal>MAP</literal> takes the output of its left
+    subexpression and uses it as the input to its right subexpression.
+   </para>
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is
+    similar to a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists
+    of three replaceable parts. The first one is the configuration used to
+    construct the lexeme set for the matching condition. If the condition is
+    triggered, the command is executed. Use the command <literal>KEEP</literal>
+    to avoid repeating the same configuration in the condition and command
+    parts; the command may, however, differ from the condition. The
+    <literal>ELSE</literal> branch is executed otherwise.
+   </para>
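+   <para>
+    For example (assuming the <literal>unaccent</literal> filtering dictionary
+    from the unaccent module is installed), the <literal>MAP</literal>
+    operator can feed the output of one dictionary into another:
+   </para>
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>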
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, the stemmer
+   dictionaries are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine searches for both exact and processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact
+    form of the word as well as lexemes produced by a linguistic-aware
+    dictionary (e.g. <literal>english_stem</literal>) for the processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
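+  <para>
+   With such a mapping, <literal>to_tsvector('my_config', 'books')</literal>
+   keeps both the stemmed and the exact form, yielding
+   <literal>'book':1 'books':1</literal>.
+  </para>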
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 610b7bf..1253b41 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type.  Conditions and commands are expressions on
+    dictionaries (<xref linkend="textsearch-dictionaries"/>), with a matching
+    clause in the condition.  The command of the first condition that
+    evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token.  For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2232,8 +2233,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in the comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2265,38 +2266,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition.  If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to
+   generate the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries as operands, the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>
+   and <literal>INTERSECT</literal>,
+   and the special operator <literal>MAP</literal>.
+   The <literal>MAP</literal> operator uses the output of the left
+   subexpression as input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the
+    additional keyword <literal>KEEP</literal>, which reuses the result of the
+    condition as the output.
+  </para>
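+
+  <para>
+   For instance, the following case expression (taken from the multilingual
+   example below) keeps the <literal>english_hunspell</literal> output when
+   it matches and falls back to the stemmer otherwise:
+  </para>
+
+<programlisting>
+CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+</programlisting>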
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted in turn to process a
+   token, and the first non-<literal>NULL</literal> output is accepted as the
+   processing result.
+  </para>
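+
+  <para>
+   As the <function>ts_debug</function> output below illustrates, a
+   comma-separated mapping is interpreted as nested cases, so
+   <literal>my_synonym, english_stem</literal> behaves as:
+  </para>
+
+<programlisting>
+CASE my_synonym WHEN MATCH THEN KEEP
+ELSE CASE english_stem WHEN MATCH THEN KEEP END
+END
+</programlisting>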
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the narrowest, most specific dictionary,
+   then the more general dictionaries, finishing with a very general
+   dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, using the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both English and German, using the
+   operator-based variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration makes it possible to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one
+   may be used to mix an exact-form search for one word with a linguistic
+   search for others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example, the simple dictionary is used to prevent normalization of words in the query.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     Because a <literal>tsvector</literal> carries no information about the
+     origin of each lexeme, a stemmed form used as an exact form in a query
+     may produce false-positive matches.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the
+   <literal>MAP</literal> operator. If the filtering dictionary returns
+   <literal>NULL</literal>, the initial token is passed to the right
+   subexpression.
   </para>
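+
+  <para>
+   For example, assuming a dictionary named <literal>unaccent</literal>
+   created from the <xref linkend="unaccent"/> module, accents could be
+   stripped before stemming with a mapping like:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+    ALTER MAPPING FOR word WITH unaccent MAP english_stem;
+</programlisting>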
 
   <sect2 id="textsearch-stopwords">
@@ -2463,9 +2552,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2477,9 +2566,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3108,6 +3200,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3183,7 +3290,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3227,14 +3335,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3247,32 +3361,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3298,13 +3412,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 5652e9e..f9fdf4d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -944,55 +944,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a84351..53ee576 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 }
 
 /*
+ * Parse a parse node extracted from a dictionary mapping and transform it
+ * into the internal representation of the dictionary mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
+
+/*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
 static void
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index fd3001c..3e2385f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4389,6 +4389,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5396,6 +5432,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 7d2aa1a..c277478 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2188,6 +2188,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3533,6 +3563,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 5329432..e2b2b4a 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -308,7 +310,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <ival>	vacuum_option_list vacuum_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -582,6 +584,13 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_map_set_expr dictionary_map_case
+					dictionary_map_action dictionary_map
+					opt_dictionary_map_case_else dictionary_config
+					dictionary_config_comma
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -643,13 +652,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10345,24 +10355,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10414,6 +10426,134 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_config:
+			dictionary_map { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map:
+			dictionary_map_case { $$ = $1; }
+			| dictionary_map_set_expr { $$ = $1; }
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_map { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_map { $$ = $2; }
+			| { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_map WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_map_set_expr:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_case dictionary_map_set_expr_operator dictionary_map_case
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+			| dictionary_map_command_expr_paren dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_set_expr ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -15064,6 +15204,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15368,6 +15509,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468a..e61ad4f 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..2b9d718
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1054 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, based on the assumption that 1024 stack frames
+ * are enough for parsing configurations
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is in construction now */
+} TSMapJsonbParseData;
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement * JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print the name of the dictionary into the StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+
+	if (expression->left)
+		TSMapPrintElement(expression->left, result);
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	if (expression->right)
+		TSMapPrintElement(expression->right, result);
+}
+
+/*
+ * Print the CASE configuration construct into the StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into the StringInfo variable result.
+ * Dispatches to the print function matching the element type.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	/* makeStringInfo() already initializes the buffer */
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * The buffer is large enough for any 32-bit integer: up to 11 characters
+	 * with sign, plus the terminating NULL-character.
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert an FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert an FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert an FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert an FTS element into JsonbValue. Common entry point for all types of TSMapElement
+ */
+JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert an FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 ||
+		   strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check is a key one of FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 ||
+		   strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside an FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside an FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into a dictionary-OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into a string-based TSMapElement.
+ * Used for special values such as the KEEP command.
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object. On the first key inside a
+			 * JSONB object, the element's type is selected based on that key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside the object under construction
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
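+ *
+ * A typical (assumed) usage when loading a configuration from the catalog:
+ *
+ *	TSMapElement *map = JsonbToTSMap(configJsonb);
+ *	map = TSMapMoveToMemoryContext(map, CacheMemoryContext);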
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of the data array. Unused elements
+								 * in data are filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit(void)
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID to the list. If it is already stored in the list, it won't
+ * be added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Used for internal recursive calls.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move an FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move an FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move an FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5..f476abb 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of a token produced by the FTS parser, together with
+ * per-dictionary processing state used during phrase dictionary processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Was accepted by some dictionary */
+	bool	   *rejected;		/* Was rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary has not finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Did the dictionary take part in processing
+								 * of the current token */
+	TSLexeme   *tmpResult;		/* Last result returned by a thesaurus-like
+								 * dictionary, if the dictionary is still
+								 * waiting for more lexemes */
+} DictState;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration that produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possibly-accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * running the lexize process multiple times
+								 * with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid an infinite loop
+								 * in configurations with a phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Last token tokentype */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Set the token corresponding to the current lexeme
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of lexemes in a TSLexeme array, not counting the
+ * terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
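+ *
+ * A hypothetical example: for input [cat (nvariant 1), cat (nvariant 1),
+ * cats (nvariant 1)] the repeated "cat" entry is dropped and the result is
+ * [cat (nvariant 1), cats (nvariant 1)].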
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
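+ *
+ * Entries flagged with TSL_ADDPOS start a new position. A hypothetical
+ * example: left = [a, b+ADDPOS] and right = [x, y+ADDPOS] merge into
+ * [a, x, b, y], i.e. one positional group is taken from each side in turn.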
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes into those generated by regular dictionaries and those from
+ * multi-input dictionaries, then combine them with respect to positions.
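+ *
+ * Lexemes flagged TSL_MULTI (marked by TSLexemeMarkMulti as coming from
+ * thesaurus-like dictionaries) are separated from the rest and then
+ * re-merged position-wise, so phrase replacements and per-word lexemes can
+ * share one output stream.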
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the right lexemes are appended after the last left
+ * lexeme and the first of them is marked with the TSL_ADDPOS flag
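+ *
+ * A hypothetical example: union of [a, b] and [c] with append=true yields
+ * [a, b, c] with TSL_ADDPOS set on "c", so "c" starts the next position;
+ * nvariant numbers of the right list are shifted to keep them distinct.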
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only those stored in the left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary waits for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as not NULL.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
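+ *
+ * Each lexeme produced by the left subexpression is wrapped in a temporary
+ * token and fed to the right subexpression; the per-lexeme outputs are
+ * unioned to form the final result.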
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			 (left_res == NULL || (left_res->flags & TSL_FILTER) == 0))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = palloc0(sizeof(TSMapElement));
+
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement
+ * Common entry point for all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helper functions
+ *-------------------
+ */
+
+/*
+ * Processing of EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get the last accepted results of phrase dictionaries
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states which weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Run lexize processing on the towork queue in LexizeData
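+ *
+ * High-level flow, as implemented below: run the mapping rules for the head
+ * token, roll back if a multi-token dictionary rejected a partially matched
+ * phrase, delay output while any such dictionary is mid-phrase, and finally
+ * deduplicate and copy the resulting lexemes before returning them.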
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy result since it may be stored in LexemesBuffer and removed at the
+	 * next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by temporary TSMapElement
+ */
+
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from the input text and add it to the processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: regconfiguration(Oid) cfgId, text *inputText
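+ *
+ * Example call (illustrative; the output columns are those declared for
+ * this function in pg_proc.h):
+ *   SELECT * FROM ts_debug('english', 'books');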
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid		   *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid		   *currentDictionary;
+
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[6] = str->data;
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
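+				/*
+				 * Start the highlight only at the first word of a run of
+				 * selected words, and close it after the last one, so
+				 * consecutive selected words share one startsel/stopsel pair.
+				 */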
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03ae..0dd846b 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b38178..f251e83 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 3d5c194..1ec3834 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -415,11 +415,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -450,8 +449,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -465,13 +464,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -483,6 +480,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -492,51 +490,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
+			maxtokentype = toktype;
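+			/*
+			 * Decode the jsonb mapping into a TSMapElement tree, keep a
+			 * long-lived copy in CacheMemoryContext, and free the temporary
+			 * tree.
+			 */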
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8ca83c0..6047e26 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14468,15 +14468,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
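+	/*
+	 * Since version 11 the dictionary map is stored as jsonb, so let the
+	 * server render it with dictionary_mapping_to_text().
+	 */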
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, m.mapseqno\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
@@ -14489,20 +14503,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 		char	   *tokenname = PQgetvalue(res, i, i_tokenname);
 		char	   *dictname = PQgetvalue(res, i, i_dictname);
 
-		if (i == 0 ||
-			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
-		{
-			/* starting a new token type, so start a new command */
-			if (i > 0)
-				appendPQExpBufferStr(q, ";\n");
-			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
-							  fmtId(cfginfo->dobj.name));
-			/* tokenname needs quoting, dictname does NOT */
-			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
-							  fmtId(tokenname), dictname);
-		}
-		else
-			appendPQExpBuffer(q, ", %s", dictname);
+		/* starting a new token type, so start a new command */
+		if (i > 0)
+			appendPQExpBufferStr(q, ";\n");
+		appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
+						  fmtId(cfginfo->dobj.name));
+		/* tokenname needs quoting, dictname does NOT */
+		appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
+						  fmtId(tokenname), dictname);
 	}
 
 	if (ntups > 0)
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 466a780..2ea565d 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4610,25 +4610,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
+	if (pset.sversion >= 110000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 0bb8754..1dd4938 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index f01648c..201ef17 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4925,6 +4925,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,3770,25,25,1009}" "{i,i,o,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,configuration,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index a3d9e3f..6bcd44a 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,109 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef so that the same type name can be used in the
+ * generated DB initialization script and in the C source code
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type; /* Type of the element */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
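+/*
+ * Illustration (not part of the catalog definition): a mapping such as
+ *
+ *   CASE ispell WHEN MATCH THEN KEEP ELSE english_stem END
+ *
+ * is stored as a TSMapCase whose condition is a TSMAP_DICTIONARY element
+ * for ispell, whose command is a TSMAP_KEEP element, and whose elsebranch
+ * is a TSMAP_DICTIONARY element for english_stem.
+ */
+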
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 74b094a..23eef6a 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -381,6 +381,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 76a73b2..2fbeda9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3384,6 +3384,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* Type should be detected by kind value */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3396,6 +3440,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 26af944..f56af7e 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -219,6 +219,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -241,6 +242,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d5..4633dd7 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..79e6180
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide an interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+/* Replace all occurances of oldDict by newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move rule list into specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa..d970eec 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44f..40029f3 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0c1d7c7..04ac38b 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,6 +679,153 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..c0e9fc5 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990..320e220 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 1633c0d..8662820 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -189,6 +240,43 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..6f8af63 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#18Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksandr Parfenov (#17)
1 attachment(s)
Re: Flexible configuration for full-text search

I have found an issue in the grammar which didn't allow complex
expressions (CASEs used as operands) to be constructed without
parentheses.

I fixed and simplified the grammar a little bit. The rest of the patch is
the same.
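
For example, an expression like the following (borrowed from the updated
documentation in the patch) now parses without extra parentheses around
the CASE operands:

ALTER TEXT SEARCH CONFIGURATION multi_en_de
    ADD MAPPING FOR asciiword, word WITH
        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
         UNION
        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;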

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v8.patch (text/x-patch)
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e..37b9337 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
  {����}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 3102139..6ce21cd 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -2,7 +2,6 @@ CREATE EXTENSION unaccent;
 
 -- must have a UTF8 database
 SELECT getdatabaseencoding();
-
 SET client_encoding TO 'KOI8';
 
 SELECT unaccent('foobar');
@@ -16,3 +15,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', '����');
 SELECT ts_lexize('unaccent', '����');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94..ecc3704 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. The dictionary expression
+      is a condition/command/else triple that defines the way to process
+      the text. The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +170,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionaries Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats. The simplest format is the name of a
+    dictionary to use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, the user should interconnect dictionaries with operators.
+    The operators <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on sets.
+    The special operator <literal>MAP</literal> takes the output of the left
+    subexpression and uses it as the input to the right subexpression.
+   </para>
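+   <para>
+    For example, the <literal>unaccent</literal> filtering dictionary can be
+    chained in front of a stemmer with <literal>MAP</literal> (a sketch; any
+    stemming dictionary could stand in for <literal>russian_stem</literal>):
+   </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+    ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>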
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    replaceable parts. The first one is the configuration used to construct the
+    lexeme set for matching the condition. If the condition is triggered, the
+    command is executed. Use the command <literal>KEEP</literal> to avoid
+    repeating the same configuration in the condition and command parts;
+    however, the command may differ from the condition. The
+    <literal>ELSE</literal> branch is executed otherwise.
+   </para>
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, the stemmer
+   dictionaries are used to process a token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
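+
+  <para>
+   With this mapping, a query such as <literal>to_tsquery('my_config',
+   'beginnen')</literal> matches German documents, while
+   <literal>to_tsquery('my_config', 'lack')</literal> matches English ones
+   (see the corresponding examples in the full-text search chapter).
+  </para>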
+
+  <para>
+    In order to combine search for both exact and processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact
+    form of the word as well as lexemes produced by a linguistic-aware dictionary
+    (e.g. <literal>english_stem</literal>) for the processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 610b7bf..1253b41 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type. Conditions and commands are expressions on dictionaries
+    (<xref linkend="textsearch-dictionaries"/>), with a matching clause in the
+    condition. The command of the first condition that evaluates to true emits
+    one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token. For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is also ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2232,8 +2233,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in the comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2265,38 +2266,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to
+   generate the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries used as operands, combined
+   with the basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP</literal>.
+   The special operator <literal>MAP</literal> uses the output of the left
+   subexpression as the input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the
+    additional keyword <literal>KEEP</literal>, which reuses the result of the
+    condition as the output.
+  </para>
+
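+  <para>
+   For example, <literal>KEEP</literal> lets the output of the dictionary
+   tested in the condition be emitted without naming the dictionary twice
+   (a sketch using the <literal>english_ispell</literal> dictionary defined
+   later in this chapter):
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+    ALTER MAPPING FOR asciiword WITH
+        CASE english_ispell WHEN MATCH THEN KEEP ELSE english_stem END;
+</programlisting>
+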
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of the text
+   search configuration. Each dictionary is consulted in turn to process a token,
+   and the first non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the narrowest, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer in the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both the English and German languages via
+   the operator-separated variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration makes it possible to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be used to mix
+   search for the exact form of one word with linguistic search for others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example the <literal>simple</literal> dictionary is used to prevent the words of a query from being normalized.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     The lack of information about the origin of each lexeme in a <literal>tsvector</literal> may
+     lead to false-positive matches when a stemmed form coincides with an exact form used in a query.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the <literal>MAP</literal>
+   operator. If the filtering dictionary returns <literal>NULL</literal>, it passes the
+   initial token to the right subexpression, as in the example below.
   </para>
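+
+  <para>
+   For instance, mirroring the module's regression test, a Russian
+   configuration could strip accents from tokens before stemming them:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION unaccent
+    ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>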
 
   <sect2 id="textsearch-stopwords">
@@ -2463,9 +2552,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2477,9 +2566,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3108,6 +3200,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3183,7 +3290,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3227,14 +3335,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3247,32 +3361,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3298,13 +3412,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 5652e9e..f9fdf4d 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -944,55 +944,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a84351..53ee576 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 }
 
 /*
+ * Parse parse node extracted from dictionary mapping and transform it into
+ * internal representation of dictionary mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
+
+/*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
 static void
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index fd3001c..3e2385f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4389,6 +4389,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5396,6 +5432,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 7d2aa1a..c277478 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2188,6 +2188,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3533,6 +3563,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 5329432..8b752ac 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -308,7 +310,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <ival>	vacuum_option_list vacuum_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -582,6 +584,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_map_set_expr dictionary_map_case
+					dictionary_map_action opt_dictionary_map_case_else
+					dictionary_config dictionary_config_comma
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -643,13 +651,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10345,24 +10354,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10414,6 +10425,117 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_config:
+			dictionary_map_set_expr { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_map_set_expr { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_map_set_expr { $$ = $2; }
+			| { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_map_set_expr WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_map_set_expr:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_set_expr dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_set_expr ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+			| dictionary_map_case			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -15064,6 +15186,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15368,6 +15491,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468a..e61ad4f 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..2b9d718
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1054 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, based on the assumption that 1024 stack frames
+ * are enough for parsing configurations
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is in construction now */
+} TSMapJsonbParseData;
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement * JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+
+	if (expression->left)
+		TSMapPrintElement(expression->left, result);
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	if (expression->right)
+		TSMapPrintElement(expression->right, result);
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Uses other function and serves for element type detection.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the mapping of a text search configuration for a given token type as text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
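+	/* Token types without a mapping simply produce an empty string. */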
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * The buffer must fit any 32-bit integer: up to 11 characters including
+	 * the sign, plus the terminating zero byte.
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
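+	/*
+	 * Composite operands (expressions, cases) serialize themselves into
+	 * jsonbState inside the recursive call; only scalar results such as
+	 * dictionary OIDs have to be pushed explicitly as WJB_VALUE here.
+	 */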
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common point for all types of TSMapElement
+ */
+JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * The JsonbValue string may not be null-terminated.  Make a terminated
+	 * copy so strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 || strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * The JsonbValue string may not be null-terminated.  Make a terminated
+	 * copy so strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 || strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * The JsonbValue string may not be null-terminated.  Make a terminated
+	 * copy so strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * The JsonbValue string may not be null-terminated.  Make a terminated
+	 * copy so strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object.  At the first key inside a
+			 * JSONB object, the element type is selected based on the key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of a new object.  Its concrete type is not
+			 * known until the first key is seen, so push the generic
+			 * TSMPS_READ_COMPLEX_OBJ state for now.
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Attach the constructed object to its parent, in the slot
+			 * selected by the current parser state
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside the object under construction
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as a dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of the data array.  Unused elements
+								 * are filled with InvalidOid, so the list is
+								 * always InvalidOid-terminated */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit(void)
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID to the list.  If it is already present, it won't be added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Recursive worker for TSMapGetDictionaries.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move a FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5..f476abb 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of token produced by FTS parser. It contains intermediate
+ * lexemes in case of phrase dictionary processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Is accepted by some dictionary */
+	bool	   *rejected;		/* Is rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary has not finished processing and
+								 * waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which are processed and
+									 * accepted, used in last returned result
+									 * by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Did the dictionary take part in processing
+								 * the current token */
+	TSLexeme   *tmpResult;		/* Last result returned by thesaurus-like
+								 * dictionary, if dictionary still waiting for
+								 * more lexemes */
+} DictState;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possible accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes, which is not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * multiple execution of token lexize process
+								 * with same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing.  Used to avoid infinite loops in
+								 * configurations with a phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Last token tokentype */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponded ParsedLex */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize an empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Hand the already-consumed tokens (the waste list) to the caller, or free them
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		/* Shifting left: source and destination overlap, so use memmove */
+		memmove(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
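+	/*
+	 * Any existing state for this dictionary is replaced by the copy
+	 * appended at the tail.  Callers must use the returned pointer: the
+	 * array may move when repalloc'd by a later insert.
+	 */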
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there are saved lexemes generated by the specified TSMapElement for the token
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get the saved lexemes generated by the specified TSMapElement for the token
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove the saved lexemes generated by the specified TSMapElement for the token
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		/* Shifting left: source and destination overlap, so use memmove */
+		memmove(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save the lexemes generated by the specified TSMapElement for the token
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
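+	/*
+	 * Several buffer entries can alias the same TSLexeme array, since a
+	 * result may be cached both under the dictionary element and under the
+	 * enclosing expression.  Mark every alias before freeing so that each
+	 * array is pfree'd exactly once.
+	 */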
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of lexemes in the array, not counting the terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
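+	/*
+	 * Mark duplicates instead of deleting in place: an exact duplicate
+	 * within the same nvariant group is dropped individually, while a
+	 * lexeme repeated in a different nvariant group is dropped only with
+	 * its whole group, and only if the two groups match element by element.
+	 */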
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
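+		/*
+		 * Interleave the two lists one position group at a time: a group
+		 * extends up to the next lexeme carrying TSL_ADDPOS, so the inner
+		 * do-while loops copy whole groups alternately from left and right.
+		 */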
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the first right-hand lexeme is flagged with TSL_ADDPOS so it starts a new position
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
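+		/*
+		 * Offset the nvariants of the right-hand lexemes by the left-hand
+		 * maximum so that variant groups from the two lists never collide.
+		 */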
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only those stored in the left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
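+			/*
+			 * A non-NULL result while getnext is set means the dictionary
+			 * matched a complete phrase but may still match a longer one:
+			 * stash the result in tmpResult, promote the buffered tokens
+			 * to accepted, and emit nothing until the dictionary finishes.
+			 */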
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary waits for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as not NULL.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			((left_res != NULL && (left_res->flags & TSL_FILTER) == 0) || left_res == NULL))
+		result = left_res;
+	else
+	{
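+		/*
+		 * Re-submit every lexeme produced by the left operand to the right
+		 * operand as a fresh token and union the per-lexeme results.  The
+		 * temporary rule records the mapping for ts_debug output.
+		 */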
+		TSMapElement *relatedRuleTmp = NULL;
+		relatedRuleTmp = palloc0(sizeof(TSMapElement));
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement
+ * Common point of all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
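+		/*
+		 * With a KEEP command, a matched CASE re-executes its condition and
+		 * emits exactly the lexemes that made the condition true; dictionary
+		 * invocations are served from the lexemes buffer, so the token is
+		 * not lexized twice.
+		 */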
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helpers functions
+ *-------------------
+ */
+
+/*
+ * Processing of EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get last accepted result of the phrase-dictionary
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states that were not used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			/* the list was compacted; rescan it from the beginning */
+			i = -1;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check whether any dictionaries have not yet processed the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Perform lexize processing on the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
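+
+	/*
+	 * skipDictionary excludes, for exactly one LexizeExec pass, a dictionary
+	 * whose phrase was just rejected, so that the replayed tokens cannot be
+	 * captured by it again; it must be reset before returning.
+	 */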
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
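+			/*
+			 * A multi-token dictionary consumed several tokens and then gave
+			 * up: splice its buffered intermediate tokens back onto the
+			 * front of the towork queue so they are re-lexized from scratch,
+			 * and drop any lexemes produced while the phrase was pending.
+			 */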
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
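+
+		/*
+		 * Remember the current token in each active dictionary state:
+		 * tokens already accepted as part of a phrase go to acceptedTokens,
+		 * the rest to intermediateTokens so they can be replayed if the
+		 * phrase is rejected.
+		 */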
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in LexemesBuffer and freed at
+	 * the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
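+
+	/*
+	 * Seed type with a positive dummy value so that the first iteration
+	 * fetches a token from the parser.  Once the parser is exhausted
+	 * (type == 0), the loop keeps running until the towork queue drains,
+	 * flushing any pending multi-token dictionary state.
+	 */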
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
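+	/* Record, in each token, the rule branch that produced its lexemes,
+	 * so ts_debug can print it in the "command" column. */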
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: regconfig (Oid) cfgId, text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
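+
+		/*
+		 * Output columns: 0 alias, 1 description, 2 token, 3 dictionaries,
+		 * 4 configuration, 5 command (the rule branch that fired) and
+		 * 6 lexemes.
+		 */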
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		initStringInfo(str);
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid		   *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid		   *currentDictionary = NULL;
+
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[6] = str->data;
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
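+	/* As in parsetext(): keep looping until the towork queue drains. */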
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
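+				/*
+				 * Emit startsel/stopsel only at the boundaries of a run of
+				 * selected words, so that adjacent selected words share a
+				 * single pair of highlight markers.
+				 */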
-				if (wrd->selected)
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03ae..0dd846b 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b38178..f251e83 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 3d5c194..1ec3834 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -415,11 +415,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -450,8 +449,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -465,13 +464,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -483,6 +480,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -492,51 +490,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
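+			/*
+			 * Each surviving row carries the whole mapping for one token
+			 * type as jsonb: decode it into a TSMapElement tree and copy
+			 * the tree into CacheMemoryContext so it outlives the scan.
+			 */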
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8ca83c0..6047e26 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14468,15 +14468,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
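+	/*
+	 * On v11+ servers the whole mapping for a token type is stored as jsonb
+	 * in pg_ts_config_map.mapdicts; dictionary_mapping_to_text() renders it
+	 * as the text of the WITH clause.  Older servers store one row per
+	 * (token type, mapseqno) pair.
+	 */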
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  pg_catalog.dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
@@ -14489,20 +14503,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 		char	   *tokenname = PQgetvalue(res, i, i_tokenname);
 		char	   *dictname = PQgetvalue(res, i, i_dictname);
 
-		if (i == 0 ||
-			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
-		{
-			/* starting a new token type, so start a new command */
-			if (i > 0)
-				appendPQExpBufferStr(q, ";\n");
-			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
-							  fmtId(cfginfo->dobj.name));
-			/* tokenname needs quoting, dictname does NOT */
-			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
-							  fmtId(tokenname), dictname);
-		}
-		else
-			appendPQExpBuffer(q, ", %s", dictname);
+		/* Start a new command for each token type.  On pre-v11 servers a
+		 * token type may span several rows; join those with commas. */
+		if (i == 0 ||
+			strcmp(tokenname, PQgetvalue(res, i - 1, i_tokenname)) != 0)
+		{
+			if (i > 0)
+				appendPQExpBufferStr(q, ";\n");
+			appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n",
+							  fmtId(cfginfo->dobj.name));
+			/* tokenname needs quoting, dictname does NOT */
+			appendPQExpBuffer(q, "    ADD MAPPING FOR %s WITH %s",
+							  fmtId(tokenname), dictname);
+		}
+		else
+			appendPQExpBuffer(q, ", %s", dictname);
 	}
 
 	if (ntups > 0)
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 466a780..2ea565d 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4610,25 +4610,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
+	if (pset.sversion >= 110000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 0bb8754..1dd4938 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index f01648c..201ef17 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4925,6 +4925,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,3770,25,25,1009}" "{i,i,o,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,configuration,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index a3d9e3f..6bcd44a 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,109 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef in order to use the same type name in the
+ * generated DB initialization script and in C source code
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type; /* Type of the element */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
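+/*
+ * For example, the mapping
+ *     CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+ * is stored as a TSMAP_CASE element whose condition is the
+ * english_hunspell dictionary, whose command is a TSMAP_KEEP element and
+ * whose elsebranch is the english_stem dictionary.
+ */
+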
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 74b094a..23eef6a 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -381,6 +381,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 76a73b2..2fbeda9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3384,6 +3384,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* actual type is determined by kind */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3396,6 +3440,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 26af944..f56af7e 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -219,6 +219,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -241,6 +242,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d5..4633dd7 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..79e6180
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB formated data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+/* Replace all occurances of oldDict by newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move the mapping tree into the specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of the mapping tree */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa..d970eec 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44f..40029f3 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0c1d7c7..04ac38b 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,6 +679,153 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..c0e9fc5 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990..320e220 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 1633c0d..8662820 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -189,6 +240,43 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..6f8af63 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#19Aleksander Alekseev
a.alekseev@postgrespro.ru
In reply to: Aleksandr Parfenov (#18)
Re: Flexible configuration for full-text search

The following review has been posted through the commitfest application:
make installcheck-world: not tested
Implements feature: not tested
Spec compliant: not tested
Documentation: not tested

Unfortunately this patch doesn't apply anymore: http://commitfest.cputube.org/

#20Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksander Alekseev (#19)
1 attachment(s)
Re: Flexible configuration for full-text search

On Mon, 05 Mar 2018 12:59:21 +0000
Aleksander Alekseev <a.alekseev@postgrespro.ru> wrote:

The following review has been posted through the commitfest
application: make installcheck-world: not tested
Implements feature: not tested
Spec compliant: not tested
Documentation: not tested

Unfortunately this patch doesn't apply anymore:
http://commitfest.cputube.org/

Thank you for noticing that. A refreshed version of the patch is
attached.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v9.patch (text/x-patch)
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e..37b9337 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
 {ЕЖИК}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 3102139..6ce21cd 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -2,7 +2,6 @@ CREATE EXTENSION unaccent;
 
 -- must have a UTF8 database
 SELECT getdatabaseencoding();
-
 SET client_encoding TO 'KOI8';
 
 SELECT unaccent('foobar');
@@ -16,3 +15,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', 'ёлка');
 SELECT ts_lexize('unaccent', 'ЁЖИК');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94..ecc3704 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. A dictionary expression
+      is a condition/command/else triple that defines the way to process
+      the text. The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +170,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionaries Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats. The simplest format is the name of a
+    dictionary to use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, connect the dictionaries with operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on sets.
+    The special operator <literal>MAP</literal> takes the output of the left
+    subexpression and uses it as the input to the right subexpression.
+   </para>
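+   <para>
+    For example (a sketch drawn from this patch's regression tests), the
+    following mapping indexes a token only when its exact form differs from
+    its stemmed form:
+   </para>
+   <programlisting>
+ALTER TEXT SEARCH CONFIGURATION english_except
+    ALTER MAPPING FOR asciiword WITH simple EXCEPT english_stem;
+   </programlisting>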
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to a
+    <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    replaceable parts. The first is a configuration used to construct the lexeme
+    set checked by the matching condition. If the condition is triggered, the command
+    is executed. Use the command <literal>KEEP</literal> to avoid repeating the same
+    configuration in the condition and command parts; however, the command may
+    differ from the condition. Otherwise, the <literal>ELSE</literal> branch is executed.
+   </para>
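+   <para>
+    As an illustration (assuming a <literal>thesaurus</literal> dictionary
+    has been created, as in this patch's regression tests), the following
+    mapping keeps the thesaurus output when it matches and falls back to the
+    stemmer otherwise:
+   </para>
+   <programlisting>
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst
+    ALTER MAPPING FOR asciiword WITH CASE
+        thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+    END;
+   </programlisting>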
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, the stemmer
+   dictionaries are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine search for both exact and processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact
+    form of the word as well as lexemes produced by a linguistic-aware dictionary
+    (e.g. <literal>english_stem</literal>) for the processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 610b7bf..1253b41 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type; conditions and commands are expressions on dictionaries
+    with a matching clause in the condition (<xref linkend="textsearch-dictionaries"/>).
+    The command of the first condition that evaluates to true emits one or more
+    normalized <firstterm>lexemes</firstterm> to represent the token.  For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2232,8 +2233,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in the comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2265,38 +2266,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to
+   generate the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries used as operands, the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>,
+   and the special operator <literal>MAP</literal>.
+   The <literal>MAP</literal> operator uses the output of the left subexpression
+   as the input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the additional
+    keyword <literal>KEEP</literal>, which reuses the result of the condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of text
+   search configuration. Each dictionary is consulted in turn to process a token,
+   and the first non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the narrowest, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, using the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both the English and German languages,
+   using the operator-separated variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration provides the ability to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be
+   used to mix an exact-form search for one word with a linguistic search for the others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example the <literal>simple</literal> dictionary is used to prevent query words from being normalized.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     Since the <literal>tsvector</literal> lacks information about the origin of each
+     lexeme, a stemmed form used as an exact form in a query may lead to false-positive matches.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left of the <literal>MAP</literal>
+   operator. If the filtering dictionary returns <literal>NULL</literal>, it passes
+   the initial token to the right subexpression.
   </para>
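+
+  <para>
+   A minimal sketch (taken from this patch's regression test for the
+   <xref linkend="unaccent"/> module, where a configuration named
+   <literal>unaccent</literal> is created by copying <literal>russian</literal>)
+   that unaccents each token and then stems the result:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION unaccent
+    ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>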
 
   <sect2 id="textsearch-stopwords">
@@ -2463,9 +2552,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2477,9 +2566,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3108,6 +3200,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3183,7 +3290,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3227,14 +3335,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3247,32 +3361,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3298,13 +3412,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 5e6e8a6..c43c9b2 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -946,55 +946,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a84351..53ee576 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 }
 
 /*
+ * Parse a node extracted from a dictionary mapping and transform it into
+ * the internal representation of the dictionary mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
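+		/* CASE node: recursively translate condition, command and optional ELSE */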
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
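+		/* Binary expression: translate both operands and record the operator */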
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
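+		/* KEEP carries no payload; it reuses the result of the CASE condition */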
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
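+		/* Leaf node: resolve the dictionary name to its OID */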
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
+
+/*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
 static void
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index f84da80..f68e616 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4402,6 +4402,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5409,6 +5445,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index ee8d925..19dfc75 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2190,6 +2190,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3541,6 +3571,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 06c03df..13d1f03 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -309,7 +311,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				analyze_option_list analyze_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -584,6 +586,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_map_set_expr dictionary_map_case
+					dictionary_map_action opt_dictionary_map_case_else
+					dictionary_config dictionary_config_comma
+
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
  * They must be listed first so that their numeric codes do not depend on
@@ -645,13 +653,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE
+	MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10353,24 +10362,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NIL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NIL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10422,6 +10433,117 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| /* EMPTY */ { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_config:
+			dictionary_map_set_expr { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
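+
+/*
+ * Note: a plain comma-separated dictionary list is still accepted by the
+ * rules above and is represented as a chain of TSMAP_OP_COMMA expressions.
+ */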
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_map_set_expr { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_map_set_expr { $$ = $2; }
+			| /* EMPTY */ { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_map_set_expr WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_map_set_expr:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_set_expr dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_set_expr ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+			| dictionary_map_case			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
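+
+/*
+ * As an illustration (not itself part of the grammar), the rules above
+ * accept a mapping definition such as:
+ *
+ *	ALTER TEXT SEARCH CONFIGURATION cfg ALTER MAPPING FOR asciiword WITH
+ *		CASE english_hunspell WHEN MATCH THEN KEEP
+ *		ELSE english_stem
+ *		END;
+ *
+ * where cfg, english_hunspell and english_stem are hypothetical names of a
+ * configuration and of dictionaries.
+ */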
 
 /*****************************************************************************
  *
@@ -15091,6 +15213,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15397,6 +15520,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468a..e61ad4f 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..51860ff
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1094 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "catalog/pg_namespace.h"
+#include "catalog/namespace.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, on the assumption that a stack of 1024 frames
+ * is enough for parsing any configuration
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is in construction now */
+} TSMapJsonbParseData;
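+
+/*
+ * A sketch of the JSONB encoding handled below, assuming hypothetical
+ * dictionary OIDs 16384 and 16385 (operators are stored as the numeric
+ * TSMAP_OP_* codes):
+ *
+ *	{"condition": 16384,
+ *	 "command": {"operator": 1, "left": 16384, "right": 16385},
+ *	 "match": 1}
+ *
+ * Dictionaries are encoded as numeric OIDs, expressions as objects with
+ * "operator"/"left"/"right" keys, CASE constructions as objects with
+ * "condition"/"command"/"elsebranch"/"match" keys, and the KEEP command as
+ * the string "keep".
+ */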
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement *JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the namespace into StringInfo variable result
+ */
+static void
+TSMapPrintNamespace(Oid namespaceId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_namespace namespace;
+
+	maprel = heap_open(NamespaceRelationId, AccessShareLock);
+	mapidx = index_open(NamespaceOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(namespaceId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	namespace = (Form_pg_namespace) GETSTRUCT(maptup);
+	appendStringInfoString(result, NameStr(namespace->nspname));
+	appendStringInfoChar(result, '.');
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	if (!TSDictionaryIsVisible(dictId))
+	{
+		TSMapPrintNamespace(dict->dictnamespace, result);
+	}
+	appendStringInfoString(result, NameStr(dict->dictname));
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+	if (expression->left)
+		TSMapPrintElement(expression->left, result);
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	if (expression->right)
+		TSMapPrintElement(expression->right, result);
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Dispatches to the appropriate printing routine based on the element type.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration mapping for a given token type as text.
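+ *
+ * Assuming the function is exposed at the SQL level under the same name, a
+ * hypothetical invocation could look like:
+ *
+ *	SELECT dictionary_mapping_to_text('my_cfg'::regconfig::oid, 1);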
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * The buffer is large enough for any 32-bit integer: at most 11
+	 * characters including the sign, plus a terminating zero byte.
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert an FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert an FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert an FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert an FTS element into JsonbValue. Common entry point for all TSMapElement types
+ */
+static JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert an FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration CASE fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * zero-padded buffer so that strcmp() behaves correctly.
+	 */
+	char	   *key = palloc0(value->val.string.len + 1);
+	bool		result;
+
+	memcpy(key, value->val.string.val, value->val.string.len);
+	result = strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 ||
+		strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+	pfree(key);
+	return result;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * zero-padded buffer so that strcmp() behaves correctly.
+	 */
+	char	   *key = palloc0(value->val.string.len + 1);
+	bool		result;
+
+	memcpy(key, value->val.string.val, value->val.string.len);
+	result = strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 ||
+		strcmp(key, "right") == 0;
+	pfree(key);
+	return result;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside an FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * zero-padded buffer so that strcmp() behaves correctly.
+	 */
+	char	   *key = palloc0(value.val.string.len + 1);
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside an FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * zero-padded buffer so that strcmp() behaves correctly.
+	 */
+	char	   *key = palloc0(value.val.string.len + 1);
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as the KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object.  At the first key inside a
+			 * JSONB object, the element type is selected based on that key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside constructing object
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of data array. Uninitialized elements
+								 * in data filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit(void)
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID to the list.  If it is already stored there, it won't be
+ * added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
+/*
+ * Recursive worker for TSMapGetDictionaries: collect the OIDs of all
+ * dictionaries used in a TSMapElement.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move an FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move an FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move an FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5..f476abb 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of token produced by FTS parser. It contains intermediate
+ * lexemes in case of phrase dictionary processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Whether the token was accepted by some
+								 * dictionary */
+	bool	   *rejected;		/* Whether the token was rejected by all
+								 * dictionaries */
+	bool	   *notFinished;	/* Some dictionary has not finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens not yet accepted, but
+										 * already processed by a
+										 * thesaurus-like dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Whether the dictionary took control during
+								 * processing of the current token */
+	TSLexeme   *tmpResult;		/* Last result returned by thesaurus-like
+								 * dictionary, if dictionary still waiting for
+								 * more lexemes */
+} DictState;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration that
+						 * produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possible accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of tokens that have already been
+								 * lexized */
+	LexemesBuffer buffer;		/* Buffer of produced lexemes, used to avoid
+								 * running the lexize process several times
+								 * with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * still be rejected in the future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing.  Used to avoid an infinite
+								 * loop in configurations with a phrase
+								 * dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Token type of the last token */
+	TSLexeme   *savedLexemes;	/* Lexemes of the last token, stored for
+								 * ts_debug output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex tokens */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc0(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Pass back the tokens that correspond to the lexemes produced so far
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memmove(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memmove(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save lexemes generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of lexemes in a TSLexeme array, not counting the
+ * terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
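+ *
+ * For example (a sketch): given the lexemes
+ *	{"run" (nvariant 0), "run" (nvariant 0),
+ *	 "jog" (nvariant 1), "jog" (nvariant 2)}
+ * the exact duplicate of "run" is dropped, and the nvariant-2 group is
+ * dropped because it repeats the whole nvariant-1 group, leaving
+ *	{"run" (nvariant 0), "jog" (nvariant 1)}.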
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
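+ * (runs delimited by the TSL_ADDPOS flag are interleaved from the two
+ * lists, and right-hand nvariant numbers are shifted so that they do not
+ * collide with left-hand ones)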
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the first right lexeme is marked with TSL_ADDPOS, so
+ * the right lexemes occupy the position after the last left lexeme
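+ * (e.g., a sketch: the union of {"foo"} and {"bar"} with append=true yields
+ * {"foo", "bar"} with TSL_ADDPOS set on "bar", so "bar" occupies the next
+ * position)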
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Return only those lexemes of the left list that do not appear in the
+ * right list (set difference)
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add lexemes to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to the accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary is waiting for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as not NULL.
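+ *
+ * For example, this function evaluates the condition part of a mapping
+ * such as the following (dictionary names are illustrative):
+ *
+ *    ALTER TEXT SEARCH CONFIGURATION multi_cfg
+ *        ALTER MAPPING FOR asciiword
+ *        WITH CASE english_hunspell WHEN MATCH THEN KEEP
+ *             ELSE english_stem
+ *        END;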
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
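+ *
+ * Each lexeme produced by the left subexpression is re-submitted as a
+ * separate input token to the right subexpression; if the left side
+ * produces NULL, the right side is executed on the original token instead.
+ * At the SQL level this backs mappings such as the following (dictionary
+ * names are illustrative):
+ *
+ *    ALTER TEXT SEARCH CONFIGURATION map_cfg
+ *        ALTER MAPPING FOR asciiword
+ *        WITH my_filter MAP BY english_stem;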
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			((left_res != NULL && (left_res->flags & TSL_FILTER) == 0) || left_res == NULL))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = NULL;
+		relatedRuleTmp = palloc0(sizeof(TSMapElement));
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement.
+ * Common entry point for all possible types of TSMapElement.
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helpers functions
+ *-------------------
+ */
+
+/*
+ * Process an EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get last accepted result of the phrase-dictionary
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states that weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Perform lexize processing for the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in the LexemesBuffer and
+	 * removed at the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by a temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from the input text and add it to the processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: Oid cfgId (regconfig), text *inputText
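+ *
+ * Typical usage (the exact rows depend on the parser and configuration):
+ *
+ *    SELECT alias, token, dictionaries, command, lexemes
+ *    FROM ts_debug('english', 'The quick tests');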
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
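+		/*
+		 * Output columns: alias, description, token, dictionaries,
+		 * configuration, command and lexemes, matching the ts_debug entry
+		 * in pg_proc.h.
+		 */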
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid		   *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid		   *currentDictionary;
+
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[6] = str->data;
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
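+				/*
+				 * Emit startsel/stopsel only at the boundaries of a run of
+				 * selected words, so adjacent selected words share a single
+				 * pair of highlight markers instead of one pair per word.
+				 */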
-				if (wrd->selected)
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03ae..0dd846b 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b38178..f251e83 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 3d5c194..1ec3834 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -415,11 +415,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -450,8 +449,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -465,13 +464,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -483,6 +480,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -492,51 +490,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 566cbf2..45c6e18 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14209,15 +14209,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, m.mapseqno\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 0c3be1f..729242e 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4646,25 +4646,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
+	if (pset.sversion >= 110000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 0bb8754..1dd4938 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -260,7 +260,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 0fdb42f..d8cf9dc 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4965,6 +4965,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,3770,25,25,1009}" "{i,i,o,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,configuration,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index a3d9e3f..6bcd44a 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,109 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef in order to use the same type name in the
+ * generated DB initialization script and in C source code
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type; /* Type of the element */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
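+
+/*
+ * The default mappings above store a single-dictionary list as the jsonb
+ * array [3765], i.e. the OID of the simple dictionary.  Once installed,
+ * the stored form can be inspected directly:
+ *
+ *    SELECT maptokentype, mapdicts
+ *    FROM pg_catalog.pg_ts_config_map
+ *    WHERE mapcfg = 3748;
+ */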
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 74b094a..23eef6a 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -381,6 +381,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index f668cba..2c5c406 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3403,6 +3403,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* Type should be detected by kind value */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3415,6 +3459,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index cf32197..13fb23f 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -220,6 +220,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -242,6 +243,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d5..4633dd7 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..79e6180
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
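+
+/*
+ * TSMapReplaceDictionary backs commands such as the following (the
+ * configuration name is illustrative):
+ *
+ *    ALTER TEXT SEARCH CONFIGURATION my_cfg
+ *        ALTER MAPPING REPLACE english_stem WITH simple;
+ */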
+
+/* Move rule list into specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa..d970eec 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index 234b44f..40029f3 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1081,14 +1081,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0c1d7c7..04ac38b 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,6 +679,153 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..c0e9fc5 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index fcf9990..320e220 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -541,10 +541,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 1633c0d..8662820 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -189,6 +240,43 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..6f8af63 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#21Aleksander Alekseev
a.alekseev@postgrespro.ru
In reply to: Aleksandr Parfenov (#20)
Re: Flexible configuration for full-text search

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

LGTM.

The new status of this patch is: Ready for Committer

#22Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksander Alekseev (#21)
1 attachment(s)
Re: Flexible configuration for full-text search

On Fri, 30 Mar 2018 14:43:30 +0000
Aleksander Alekseev <a.alekseev@postgrespro.ru> wrote:

The following review has been posted through the commitfest
application: make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

LGTM.

The new status of this patch is: Ready for Committer

It seems that after d204ef6 (MERGE SQL Command) in master, the patch
no longer applies due to a conflict in the keyword lists (grammar and header).
The new version of the patch without conflicts is attached.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v10.patchtext/x-patchDownload
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e..37b9337 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
  {����}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 3102139..6ce21cd 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -2,7 +2,6 @@ CREATE EXTENSION unaccent;
 
 -- must have a UTF8 database
 SELECT getdatabaseencoding();
-
 SET client_encoding TO 'KOI8';
 
 SELECT unaccent('foobar');
@@ -16,3 +15,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', '����');
 SELECT ts_lexize('unaccent', '����');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94..ecc3704 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. A dictionary expression
+      is a condition/command/else triple that defines how the text
+      is processed. The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +170,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionaries Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can take
+    three different forms. The simplest form is the name of a dictionary to
+    use for token processing.
+   </para>
+   <para>
+    To use more than one dictionary
+    simultaneously, combine dictionaries with operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in set operations.
+    The special operator <literal>MAP</literal> takes the output of the left
+    subexpression and uses it as the input to the right subexpression.
+   </para>
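+   <para>
+    As an illustration (assuming the built-in <literal>english_stem</literal>
+    and <literal>simple</literal> dictionaries), the expression
+    <literal>english_stem UNION simple</literal> emits both the stemmed and
+    the exact form of each token, while <literal>unaccent MAP english_stem</literal>
+    removes accents before stemming, provided an <literal>unaccent</literal>
+    filtering dictionary is installed.
+   </para>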
+   <para>
+    The third form of <replaceable class="parameter">config</replaceable> is a
+    <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    parts. The first is a configuration used to construct the lexeme set
+    that the condition is matched against. If the condition holds, the command is executed.
+    Use the command <literal>KEEP</literal> to avoid repeating the same
+    configuration in the condition and command parts; the command may also differ
+    from the condition. Otherwise the <literal>ELSE</literal> branch is executed.
+   </para>
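+   <para>
+    For example, assuming an <literal>english_ispell</literal> dictionary has
+    been created, the following expression keeps the <application>Ispell</application>
+    output when a word is recognized and falls back to stemming otherwise:
+   </para>
+   <programlisting>
+CASE english_ispell WHEN MATCH THEN KEEP ELSE english_stem END
+   </programlisting>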
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, the stemmer
+   dictionaries are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    To combine search on both the exact and the processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact
+    form of the word as well as lexemes produced by a linguistic-aware dictionary
+    (e.g. <literal>english_stem</literal>) for the processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 610b7bf..1253b41 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type; conditions and commands are expressions on dictionaries,
+    with a matching clause in the condition (<xref linkend="textsearch-dictionaries"/>).
+    The first command whose condition evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token. For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2232,8 +2233,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in the comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2265,38 +2266,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token is discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to
+   generate the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries as operands, combined with the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP</literal>.
+   The <literal>MAP</literal> operator uses the output of the left subexpression as
+   the input to the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the
+    additional keyword <literal>KEEP</literal>, which reuses the result of the condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted in turn to process a token,
+   and the first non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the most narrow, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, using the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both the English and German languages,
+   using the operator-based variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration makes it possible to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be used
+   to mix searching for the exact form of one word with linguistic search for the others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example, the <literal>simple</literal> dictionary is used to prevent query words from being normalized.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     Because a <literal>tsvector</literal> carries no information about the origin of each lexeme,
+     a stemmed form used as an exact form in a query may produce false-positive matches.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the <literal>MAP</literal>
+   operator. If the filtering dictionary returns <literal>NULL</literal>, it passes the
+   initial token to the right subexpression.
   </para>
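+
+  <para>
+   For example, the <literal>unaccent</literal> dictionary created by the
+   <xref linkend="unaccent"/> module can be chained in front of a stemmer
+   (a sketch, assuming such a dictionary exists):
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+    ALTER MAPPING FOR asciiword, word WITH unaccent MAP english_stem;
+</programlisting>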
 
   <sect2 id="textsearch-stopwords">
@@ -2463,9 +2552,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2477,9 +2566,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3108,6 +3200,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3183,7 +3290,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3227,14 +3335,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3247,32 +3361,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3298,13 +3412,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index e9e1886..34b80ae 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -948,55 +948,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a84351..53ee576 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 }
 
 /*
+ * Parse a parse node extracted from a dictionary mapping and transform it
+ * into the internal representation of the dictionary mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+		{
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+			caseObject->elsebranch->parent = result;
+		}
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->left->parent = result;
+		expression->right->parent = result;
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
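+
+/*
+ * For example, a clause such as "english_hunspell UNION english_stem"
+ * (dictionary names are illustrative) arrives here as a
+ * DICT_MAP_EXPRESSION node and becomes a TSMapElement tree: a
+ * TSMAP_EXPRESSION element whose left and right children are
+ * TSMAP_DICTIONARY elements holding the dictionaries' OIDs.
+ */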
+
+/*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
 static void
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			TSMapElementFree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index c3efca3..a2235c3 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4439,6 +4439,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5452,6 +5488,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 45ceba2..71a8f9b 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2218,6 +2218,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3575,6 +3605,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b879358..16a63d3 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -310,7 +312,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				analyze_option_list analyze_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -585,6 +587,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_map_set_expr dictionary_map_case
+					dictionary_map_action opt_dictionary_map_case_else
+					dictionary_config dictionary_config_comma
+
 %type <node>	merge_when_clause opt_and_condition
 %type <list>	merge_when_list
 %type <node>	merge_update merge_delete merge_insert
@@ -650,13 +658,13 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
+	MAP MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
 	MINUTE_P MINVALUE MODE MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
@@ -10355,24 +10363,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10424,6 +10434,117 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| /* EMPTY */ { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_config:
+			dictionary_map_set_expr { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_map_set_expr { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_map_set_expr { $$ = $2; }
+			| /* EMPTY */ { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_map_set_expr WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_map_set_expr:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_map_set_expr dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_map_set_expr ')'	{ $$ = $2; }
+			| dictionary_map_dict			{ $$ = $1; }
+			| dictionary_map_case			{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
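+
+/*
+ * For illustration (the dictionary names are examples, not shipped
+ * objects), the rules above accept mappings such as:
+ *   ... WITH english_hunspell UNION english_stem
+ *   ... WITH CASE english_hunspell WHEN MATCH THEN KEEP
+ *            ELSE english_stem END
+ */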
 
 /*****************************************************************************
  *
@@ -15241,6 +15362,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATCHED
@@ -15549,6 +15671,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468a..e61ad4f 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..51860ff
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1094 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "catalog/pg_namespace.h"
+#include "catalog/namespace.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, based on the assumption that 1024 stack frames
+ * are enough for parsing configurations
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
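+
+/*
+ * For example, while parsing {"operator": ..., "left": ..., "right": ...}
+ * the automaton moves TSMPS_WAIT_ELEMENT -> TSMPS_READ_COMPLEX_OBJ (on
+ * object start) -> TSMPS_READ_EXPRESSION (on the first key), and then
+ * pushes TSMPS_READ_OPERATOR, TSMPS_READ_LEFT or TSMPS_READ_RIGHT for the
+ * corresponding values.
+ */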
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is in construction now */
+} TSMapJsonbParseData;
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement *JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the namespace into StringInfo variable result
+ */
+static void
+TSMapPrintNamespace(Oid namespaceId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_namespace namespace;
+
+	maprel = heap_open(NamespaceRelationId, AccessShareLock);
+	mapidx = index_open(NamespaceOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(namespaceId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	namespace = (Form_pg_namespace) GETSTRUCT(maptup);
+	appendStringInfoString(result, namespace->nspname.data);
+	appendStringInfoChar(result, '.');
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	if (!TSDictionaryIsVisible(dictId))
+	{
+		TSMapPrintNamespace(dict->dictnamespace, result);
+	}
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+	if (expression->left)
+		TSMapPrintElement(expression->left, result);
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	if (expression->right)
+		TSMapPrintElement(expression->right, result);
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Dispatches on the element type to the appropriate print function.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as a text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
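+
+/*
+ * Usage sketch, assuming the function is exposed at SQL level (the token
+ * type and output shown are hypothetical):
+ *   SELECT dictionary_mapping_to_text('english'::regconfig, 1);
+ * could return something like:
+ *   english_stem
+ */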
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
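+
+/*
+ * Sketch of the stored shape (dictionary OIDs are hypothetical): the mapping
+ *   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+ * is serialized roughly as
+ *   {"condition": 16444, "command": "keep", "elsebranch": 16445, "match": 1}
+ * where dictionaries are stored as numeric OIDs and KEEP as the string
+ * "keep".
+ */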
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * Buffer size covers any 32-bit integer: at most 11 characters including
+	 * the sign, plus the terminating NUL
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common point for all types of TSMapElement
+ */
+static JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 || strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 || strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into a string-based TSMapElement.
+ * Used for special values such as the KEEP command.
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object.  At the first key inside a
+			 * JSONB object, the element type is selected based on the key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside the object under construction
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of data array.  Unused elements are
+								 * filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit()
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID to the list. If it is already present, it won't be added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
+/*
+ * Get OIDs of all dictionaries used in a TSMapElement.
+ * Internal recursive worker for TSMapGetDictionaries.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+		default:
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in a TSMapElement.
+ * The returned array is terminated by InvalidOid.
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
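+
+/*
+ * Usage sketch: iterate the returned array up to the InvalidOid sentinel,
+ * as makeConfigurationDependencies() does:
+ *   Oid *cur = TSMapGetDictionaries(config);
+ *   while (*cur != InvalidOid)
+ *       use(*cur++);
+ */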
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+		default:
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move a FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+			default:
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+			default:
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5..f476abb 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of a token produced by the FTS parser.  It carries
+ * intermediate per-dictionary state used during phrase dictionary
+ * processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Is accepted by some dictionary */
+	bool	   *rejected;		/* Is rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary has not finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Whether the dictionary took control during
+								 * current token processing */
+	TSLexeme   *tmpResult;		/* Last result returned by thesaurus-like
+								 * dictionary, if the dictionary is still
+								 * waiting for more lexemes */
+} DictState;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration that produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possibly-accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that were already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes.  Used to avoid
+								 * lexizing the same token with the same
+								 * parameters more than once */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in the future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid infinite loop in
+								 * configuration with phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Last token tokentype */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+	ld->debugContext = false;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Set the token corresponding to the current lexeme
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get the state of a dictionary based on its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove the state of a dictionary based on its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme list generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the size of a TSLexeme array, not counting the terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
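+ * For example, given the entries {foo/1, bar/1, foo/1} (lexeme/nvariant),
+ * the trailing duplicate foo/1 is dropped, leaving {foo/1, bar/1}.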
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
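+ * Lexemes flagged with TSL_ADDPOS start a new position group.  Groups from
+ * the left and right lists are interleaved one by one, and right-hand
+ * nvariant numbers are shifted past the left-hand maximum so that variant
+ * groups stay distinct.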
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the first right lexeme is flagged with TSL_ADDPOS so the
+ * right list starts at the next position after the left one
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Return only those lexemes from the left list that are absent from the right list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
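+
+/*
+ * ResultStorage keeps lexemes produced while a multi-token dictionary is
+ * still undecided about a phrase.  Pending lexemes move to the accepted list
+ * once a phrase prefix is confirmed, and are emitted only when no dictionary
+ * is in the middle of processing.
+ */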
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
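+		/*
+		 * Pass the dictionary's saved private state, if it has already asked
+		 * for more tokens, so that a multi-token (thesaurus-like) dictionary
+		 * can continue matching a phrase across calls.
+		 */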
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
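+			/*
+			 * If the dictionary also returned a result, the tokens seen so
+			 * far form a recognized phrase: promote the buffered tokens to
+			 * the accepted list and keep the result as a tentative one while
+			 * the dictionary tries to match a longer phrase.
+			 */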
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary is waiting for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary is waiting for more tokens, the result is interpreted as
+ * not NULL.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
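+	/*
+	 * If the left operand produced nothing, evaluate the right operand on the
+	 * original token.  For the comma (legacy list) operator a non-filtering
+	 * left result is returned as is.  Otherwise each lexeme produced by the
+	 * left operand is fed as a separate token to the right operand and the
+	 * outputs are unioned, which covers both MAP BY and the old TSL_FILTER
+	 * behavior.
+	 */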
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			((left_res != NULL && (left_res->flags & TSL_FILTER) == 0) || left_res == NULL))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = palloc0(sizeof(TSMapElement));
+
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement
+ * Common point of all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
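+	/*
+	 * Results are memoized per (element, token) pair in the LexemesBuffer, so
+	 * a dictionary referenced both in a condition and in a command is invoked
+	 * only once per token.  The buffer is cleared at the end of each
+	 * LexizeExec call.
+	 */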
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helper functions
+ *-------------------
+ */
+
+/*
+ * Process an EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get last accepted result of the phrase-dictionary
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states that weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Do lexize processing of the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config = NULL;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
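+			/*
+			 * A multi-token dictionary rejected the phrase it was
+			 * accumulating.  Push its buffered tokens back to the front of
+			 * the towork queue and reprocess them, skipping that dictionary
+			 * so it cannot capture the same tokens again.
+			 */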
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy result since it may be stored in LexemesBuffer and removed at the
+	 * next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;					/* ensure the first loop iteration fetches a token */
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: regconfiguration(Oid) cfgId, text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
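+	/*
+	 * Fetch tokens from the parser until LexizeExec releases at least one
+	 * processed token; a multi-token dictionary may hold back several tokens
+	 * before releasing them.
+	 */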
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		initStringInfo(str);
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid		   *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid		   *currentDictionary;
+
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[6] = str->data;
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;					/* ensure the first loop iteration fetches a token */
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
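+				/* Start a highlight region only at the first selected word of a run */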
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
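+				/* Close the highlight region only at the last selected word of a run */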
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03ae..0dd846b 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b38178..f251e83 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 9734778..1ff1a92 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -418,11 +418,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -453,8 +452,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -468,13 +467,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -486,6 +483,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -495,51 +493,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
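+			/*
+			 * Each row now carries the whole mapping for one token type as
+			 * jsonb: deserialize it and move the tree into CacheMemoryContext
+			 * so that it survives beyond this scan.
+			 */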
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index d066f4f..c5cb3c6 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14223,15 +14223,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, m.mapseqno\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 0c3be1f..729242e 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4646,25 +4646,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
+	if (pset.sversion >= 110000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 7dd9d10..589bce4 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -262,7 +262,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 9bf20c0..bd9549a 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4988,6 +4988,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,3770,25,25,1009}" "{i,i,o,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,configuration,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index a3d9e3f..6bcd44a 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,109 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef in order to use the same type name in the
+ * generated DB initialization script and in C source code
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type; /* Type of the element */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index fce4802..1d3896d 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -385,6 +385,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 699fa77..6103b12 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3434,6 +3434,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* Type should be detected by kind value */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3446,6 +3490,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 4dff55a..3371f28 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -220,6 +220,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -242,6 +243,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("matched", MATCHED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d5..4633dd7 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..79e6180
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move rule list into specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa..d970eec 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index d56c70c..08c2674 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1089,14 +1089,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0c1d7c7..04ac38b 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,6 +679,153 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12..c0e9fc5 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index 656cace..4e6730f 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -545,10 +545,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 1633c0d..8662820 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -189,6 +240,43 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b..6f8af63 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#23Teodor Sigaev
teodor@sigaev.ru
In reply to: Aleksandr Parfenov (#22)
Re: Flexible configuration for full-text search

Some notices:

0) The patch conflicts with the latest changes in gram.y; the conflicts are trivial.

1) jsonb in catalog. I'm ok with it, any opinions?
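
For reference, the new column can be inspected directly, e.g. (a sketch;
any configuration row will do):

SELECT mapcfg, maptokentype, mapdicts
FROM pg_catalog.pg_ts_config_map
LIMIT 1;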

2) pg_ts_config_map.h: "jsonb mapdicts" isn't decorated with #ifdef
CATALOG_VARLEN like other varlena columns in the catalog. If it's right, please
explain and add a comment.

3) I see changes in pg_catalog, including a dropped column, a changed column
type, a changed index, a changed function, etc. Did you pay attention to
pg_upgrade? I don't see it in the patch.

4) It seems this could work:
ALTER TEXT SEARCH CONFIGURATION russian
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part
WITH english_stem union (russian_stem, simple);
^^^^^^^^^^^^^^^^^^^^^ a simpler way, instead of
WITH english_stem union (case russian_stem when match then keep else simple end);

5) The initial approach suggested distinguishing three states of a dictionary
result: null (unknown word), stopword, and usual word. Now there are only two,
so we lost the possibility to catch stopwords. One way to use stopwords is:
suppose we have two identical FTS configurations, except that one skips
stopwords and the other doesn't. The second configuration is used for indexing,
and the first one for search by default. But if we can't find anything ('to be
or to be' is a phrase that contains stopwords only), then we can use the second
configuration. For now, we need to keep two variants of each dictionary, with
and without stopwords. But if it were possible to distinguish stop and non-stop
words in the configuration, we wouldn't need duplicated dictionaries.
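
As a sketch of that fallback (docs is a hypothetical table; assume the two
configurations english_nostop, for querying, and english_withstop, for
indexing, already exist), the query side could do:

SELECT * FROM docs
WHERE to_tsvector('english_withstop', txt) @@
      CASE WHEN numnode(plainto_tsquery('english_nostop', 'to be or to be')) > 0
           THEN plainto_tsquery('english_nostop', 'to be or to be')
           ELSE plainto_tsquery('english_withstop', 'to be or to be')
      END;

-- numnode() is zero when the parsed query consists of stopwords only, so
-- the stopword-keeping configuration kicks in as the fallback.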

Aleksandr Parfenov wrote:

On Fri, 30 Mar 2018 14:43:30 +0000
Aleksander Alekseev <a.alekseev@postgrespro.ru> wrote:

The following review has been posted through the commitfest
application: make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

LGTM.

The new status of this patch is: Ready for Committer

It seems that after d204ef6 (MERGE SQL Command) in master the patch
doesn't apply due to a conflict in keywords lists (grammar and header).
The new version of the patch without conflicts is attached.

--
Teodor Sigaev E-mail: teodor@sigaev.ru
WWW: http://www.sigaev.ru/

#24Andres Freund
andres@anarazel.de
In reply to: Teodor Sigaev (#23)
Re: Flexible configuration for full-text search

Hi,

On 2018-04-05 17:26:10 +0300, Teodor Sigaev wrote:

Some notices:

0) The patch conflicts with the latest changes in gram.y; the conflicts are trivial.

1) jsonb in catalog. I'm ok with it, any opinions?

2) pg_ts_config_map.h: "jsonb mapdicts" isn't decorated with #ifdef
CATALOG_VARLEN like other varlena columns in the catalog. If it's right,
please explain and add a comment.

3) I see changes in pg_catalog, including a dropped column, a changed column
type, a changed index, a changed function, etc. Did you pay attention to
pg_upgrade? I don't see it in the patch.

4) It seems this could work:
ALTER TEXT SEARCH CONFIGURATION russian
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part
WITH english_stem union (russian_stem, simple);
^^^^^^^^^^^^^^^^^^^^^ a simpler way, instead of
WITH english_stem union (case russian_stem when match then keep else simple end);

5) The initial approach suggested distinguishing three states of a dictionary
result: null (unknown word), stopword, and usual word. Now there are only two,
so we lost the possibility to catch stopwords. One way to use stopwords is:
suppose we have two identical FTS configurations, except that one skips
stopwords and the other doesn't. The second configuration is used for
indexing, and the first one for search by default. But if we can't find
anything ('to be or to be' is a phrase that contains stopwords only), then we
can use the second configuration. For now, we need to keep two variants of
each dictionary, with and without stopwords. But if it were possible to
distinguish stop and non-stop words in the configuration, we wouldn't need
duplicated dictionaries.

Just to be clear: I object to attempting to merge this into v11. This
introduces a new user interface, arrived late in the development cycle,
and hasn't seen that much review. Not something that should be merged
two minutes before midnight.

I think it's good to continue reviewing, don't get me wrong.

- Andres

#25Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Teodor Sigaev (#23)
1 attachment(s)
Re: Flexible configuration for full-text search

On Thu, 5 Apr 2018 17:26:10 +0300
Teodor Sigaev <teodor@sigaev.ru> wrote:

Some notices:

0) The patch conflicts with the latest changes in gram.y; the conflicts are trivial.

Yes, due to the MERGE command commits touching gram.y, there were some
conflicts.

2) pg_ts_config_map.h: "jsonb mapdicts" isn't decorated with
#ifdef CATALOG_VARLEN like other varlena columns in the catalog. If it's
right, please explain and add a comment.

Since mapdicts is the only varlena column and all columns before it are
fixed-width, its offset is fixed and it is safe to access it directly. I
added a related comment about it.

3) I see changes in pg_catalog, including a dropped column, a changed
column type, a changed index, a changed function, etc. Did you pay
attention to pg_upgrade? I don't see it in the patch.

The full-text search configuration is migrated via FTS commands such
as CREATE TEXT SEARCH CONFIGURATION. pg_upgrade uses pg_dump to
create a dump of this part of the catalog, where
dictionary_mapping_to_text is used to get a textual representation of
the FTS configuration. Correct me if I'm wrong.
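
For example (a sketch against the patched catalog; english_union is any
existing configuration), that textual form can be inspected directly:

SELECT t.alias,
       dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS mapping
FROM pg_catalog.pg_ts_config AS c
JOIN pg_catalog.pg_ts_config_map AS m ON m.mapcfg = c.oid
CROSS JOIN LATERAL pg_catalog.ts_token_type(c.cfgparser) AS t
WHERE c.cfgname = 'english_union' AND t.tokid = m.maptokentype;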

4) It seems this could work:
ALTER TEXT SEARCH CONFIGURATION russian
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part
WITH english_stem union (russian_stem, simple);
^^^^^^^^^^^^^^^^^^^^^ a simpler way,
instead of WITH english_stem union (case russian_stem when match then
keep else simple end);

I added this ability since it was just a small fix in the grammar. I also
added tests for this kind of configuration. The test is a bit
synthetic because I used a synonym dictionary as one which doesn't
accept some inputs.
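
For instance, a sketch of the parenthesized form (syn_cfg and syn_sample
are hypothetical; syn_sample stands for a synonym dictionary that
recognizes only a few words):

ALTER TEXT SEARCH CONFIGURATION syn_cfg ALTER MAPPING FOR asciiword
    WITH syn_sample UNION (english_stem, simple);

The parenthesized comma-separated list behaves like the old-style mapping:
english_stem is consulted first, and simple only if it produces nothing.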

5) The initial approach suggested distinguishing three states of a
dictionary result: null (unknown word), stopword, and usual word. Now
there are only two, so we lost the possibility to catch stopwords. One
way to use stopwords is: suppose we have two identical FTS
configurations, except that one skips stopwords and the other doesn't.
The second configuration is used for indexing, and the first one for
search by default. But if we can't find anything ('to be or to be' is a
phrase that contains stopwords only), then we can use the second
configuration. For now, we need to keep two variants of each dictionary,
with and without stopwords. But if it were possible to distinguish stop
and non-stop words in the configuration, we wouldn't need duplicated
dictionaries.

With the proposed way of configuration, it is possible to create a
special dictionary used only for stopword checking and to consult it at
decision-making time.

For example, we can create a dictionary english_stopword which returns
the word itself for a stopword and NULL otherwise. With such a
dictionary we create a configuration:

ALTER TEXT SEARCH CONFIGURATION test_cfg ALTER MAPPING FOR asciiword,
word WITH
CASE english_stopword WHEN NO MATCH THEN english_hunspell END;

In the described example, english_hunspell can be implemented without
any stopword processing at all, so stopword handling and the processing
of other words are divided into separate dictionaries.

The key point of the patch is to process stopwords the same way as
other words at the level of the PostgreSQL internals, and to give users
an instrument to process them in a special way via configurations.
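
A sketch of the expected behavior under that configuration, with both
dictionaries as described above:

SELECT to_tsvector('test_cfg', 'the books');
-- 'the' matches english_stopword, so the WHEN NO MATCH branch does not
-- fire and the token is dropped; 'books' falls through to
-- english_hunspell and is normalized, giving roughly: 'book':2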

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v11.patchtext/x-patchDownload
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e9c7..37b9337635 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
  {����}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 310213994f..6ce21cdfcd 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -2,7 +2,6 @@ CREATE EXTENSION unaccent;
 
 -- must have a UTF8 database
 SELECT getdatabaseencoding();
-
 SET client_encoding TO 'KOI8';
 
 SELECT unaccent('foobar');
@@ -16,3 +15,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', '����');
 SELECT ts_lexize('unaccent', '����');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94b27..ecc37044a9 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -21,8 +21,12 @@ PostgreSQL documentation
 
  <refsynopsisdiv>
 <synopsis>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
@@ -88,6 +92,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
     </listitem>
    </varlistentry>
 
+   <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. The dictionary expression
+      is a condition/command/else triple that defines the way to process
+      the text. The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -154,6 +169,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 
  </refsect1>
 
+ <refsect1>
+  <title>Dictionaries Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be used
+    in three different formats. The simplest format is the name of a dictionary to
+    use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, the user should connect dictionaries with operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in set operations.
+    The special operator <literal>MAP</literal> takes the output of the left subexpression
+    and uses it as the input to the right subexpression.
+   </para>
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    replaceable parts. The first one is a configuration used to construct the lexeme set
+    for the matching condition. If the condition is triggered, the command is executed.
+    Use the command <literal>KEEP</literal> to avoid repeating the same
+    configuration in the condition and command parts; the command may still differ from
+    the condition. Otherwise, the <literal>ELSE</literal> branch is executed.
+   </para>
+  </refsect2>
+ </refsect1>
+
  <refsect1>
   <title>Examples</title>
 
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, stemmer dictionaries
+   are used to process a token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine search for both exact and processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact form
+    of the word as well as lexemes produced by a linguistic-aware dictionary
+    (e.g. <literal>english_stem</literal>) for processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 610b7bf033..1253b41f53 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type; conditions and commands are expressions on dictionaries
+    with a matching clause in the condition (<xref linkend="textsearch-dictionaries"/>).
+    The first command whose condition evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token. For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2232,8 +2233,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in a comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2265,38 +2266,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to generate
+   the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries used as operands, the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>,
+   and the special operator <literal>MAP</literal>.
+   The special operator <literal>MAP</literal> uses the output of the left subexpression as
+   input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the additional keyword
+    <literal>KEEP</literal> used to emit the result of the condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted to process a token, and the first
+   non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the narrowest, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, in the comma-separated variant of mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both English and German via the
+   operator-separated variant of mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration provides the ability to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown bywinds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown bywinds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be used to mix
+   searching for the exact form of one word with linguistic search for others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example the <literal>simple</literal> dictionary is used to prevent words from being normalized in the query.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     The lack of information about the origin of each lexeme in a <literal>tsvector</literal> may
+     lead to false-positive matches when a stemmed form is used as an exact form in a query.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the
+   <literal>MAP</literal> operator. If the filtering dictionary returns
+   <literal>NULL</literal>, the original token is passed to the right
+   subexpression, as sketched below.
   </para>
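+
+  <para>
+   As a minimal sketch (assuming a configuration <literal>my_config</literal>
+   and an <literal>unaccent</literal> filtering dictionary already exist), the
+   token is first passed to <literal>unaccent</literal>, and its output (or the
+   original token, if the filter returns <literal>NULL</literal>) is then
+   processed by the right-hand expression:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+    ALTER MAPPING FOR asciiword, word WITH
+        unaccent MAP CASE english_ispell WHEN MATCH THEN KEEP ELSE english_stem END;
+</programlisting>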
 
   <sect2 id="textsearch-stopwords">
@@ -2463,9 +2552,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2477,9 +2566,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3104,6 +3196,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
     Now we can set up the mappings for words in configuration
     <literal>pg</literal>:
 
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
@@ -3183,7 +3290,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3227,14 +3335,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       mapping configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3247,32 +3361,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3298,13 +3412,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index e9e188682f..34b80aea34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -948,55 +948,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a843512d1..53ee576223 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
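+			/* OID array is terminated by InvalidOid */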
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1270,6 +1283,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
+/*
+ * Transform a parse node extracted from a dictionary mapping into the
+ * internal representation of the mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index c3efca3c45..a2235c3c0c 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4439,6 +4439,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5452,6 +5488,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 45ceba2830..71a8f9b914 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2217,6 +2217,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 	return true;
 }
 
+static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
 static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
@@ -3575,6 +3605,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b879358de1..84ae8b17f4 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -310,7 +312,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				analyze_option_list analyze_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -585,6 +587,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_config dictionary_map_case
+					dictionary_map_action opt_dictionary_map_case_else
+					dictionary_config_comma
+
 %type <node>	merge_when_clause opt_and_condition
 %type <list>	merge_when_list
 %type <node>	merge_update merge_delete merge_insert
@@ -650,13 +658,13 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
+	MAP MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
 	MINUTE_P MINVALUE MODE MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
@@ -10355,24 +10363,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10424,6 +10434,100 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_config { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_config { $$ = $2; }
+			| { $$ = NULL; }
+		;
+
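+/*
+ * A CASE construction, e.g.:
+ *     CASE english_ispell WHEN NO MATCH THEN english_stem ELSE KEEP END
+ */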
+dictionary_map_case:
+			CASE dictionary_config WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_config:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_config dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_config ')'	{ $$ = $2; }
+			| dictionary_map_case			{ $$ = $1; }
+			| dictionary_config_comma		{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -15241,6 +15345,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATCHED
@@ -15549,6 +15654,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468ae9e..e61ad4fa1d 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000000..714f2a8ab2
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1114 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "catalog/pg_namespace.h"
+#include "catalog/namespace.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, on the assumption that 1024 stack frames
+ * are enough for parsing any configuration
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is in construction now */
+} TSMapJsonbParseData;
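+
+/*
+ * The JSONB is walked with JsonbIteratorNext(): WJB_BEGIN_OBJECT pushes a
+ * TSMPS_READ_COMPLEX_OBJ frame, each expression or case key pushes a state
+ * for the expected value, and the matching value or object end pops the
+ * frame, attaching the constructed element to its parent.
+ */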
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement * JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the namespace into StringInfo variable result
+ */
+static void
+TSMapPrintNamespace(Oid  namespaceId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_namespace namespace;
+
+	maprel = heap_open(NamespaceRelationId, AccessShareLock);
+	mapidx = index_open(NamespaceOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(namespaceId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	namespace = (Form_pg_namespace) GETSTRUCT(maptup);
+	appendStringInfoString(result, namespace->nspname.data);
+	appendStringInfoChar(result, '.');
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	if (!TSDictionaryIsVisible(dictId))
+	{
+		TSMapPrintNamespace(dict->dictnamespace, result);
+	}
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+
+	Assert(expression->left);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->left, result);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	Assert(expression->right);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->right, result);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Uses other function and serves for element type detection.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as a text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
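+
+/*
+ * For illustration (with hypothetical dictionary OIDs), the mapping
+ *     CASE english_ispell WHEN MATCH THEN KEEP ELSE english_stem END
+ * is serialized as
+ *     {"match": 1, "command": "keep", "condition": 36184, "elsebranch": 36186}
+ * Dictionaries are stored as numeric OIDs, KEEP as the string "keep", and
+ * expressions as objects with "left", "right" and "operator" keys.
+ */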
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * The buffer must fit any 32-bit integer: up to 11 characters with
+	 * sign, plus the terminating NUL character
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common point for all types of TSMapElement
+ */
+JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 || strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 || strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated.  Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as the KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object.  At the first key inside a
+			 * JSONB object, the element type is selected based on the key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside the object under construction
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as a dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of data array. Uninitialized elements
+								 * in data filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit(void)
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID into the list. If it is already stored in the list, it
+ * won't be added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Used for internal recursive calls.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move an FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move an FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move an FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Parents of the elements are
+ * not compared.
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5660..f476abb323 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of a token produced by the FTS parser. It contains
+ * intermediate lexemes in case of phrase-dictionary processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Accepted by some dictionary */
+	bool	   *rejected;		/* Rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary hasn't finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by a thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Whether the dictionary took control
+								 * during current token processing */
+	TSLexeme   *tmpResult;		/* Last result returned by a thesaurus-like
+								 * dictionary, if the dictionary is still
+								 * waiting for more lexemes */
+} DictState;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration that produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possibly-accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * multiple executions of the token lexize
+								 * process with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in the future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid an infinite loop
+								 * in configurations with a phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Token type of the last token */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex entries */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+	ld->debugContext = false;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Set the token corresponding to the current lexeme
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
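+		/*
+		 * Shift the remaining entries left to close the gap; the source and
+		 * destination regions overlap, so memmove is required here.
+		 */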
+		memmove(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memmove(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
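+	/*
+	 * The same TSLexeme array may be stored under several buffer keys; mark
+	 * every alias before freeing so each array is pfree'd only once.
+	 */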
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the size of a TSLexeme array, not counting the terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
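+		/*
+		 * Interleave the two lists: each pass copies one position's worth of
+		 * lexemes (a lexeme plus its followers without TSL_ADDPOS) from the
+		 * left list, then from the right one.
+		 */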
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by a multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, right lexemes are added after the last left lexeme with the TSL_ADDPOS flag
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
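+		/* Shift the nvariant numbers of the copied right-hand lexemes so
+		 * variant groups of the two lists do not collide */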
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only those stored in the left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary waits for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as not NULL.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
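+	/*
+	 * If the left subexpression produced nothing, evaluate the right one
+	 * directly. For the plain comma operator a non-TSL_FILTER left result is
+	 * returned as is. Otherwise every left-hand lexeme is re-submitted as a
+	 * separate token to the right subexpression and the results are unioned.
+	 */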
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			((left_res != NULL && (left_res->flags & TSL_FILTER) == 0) || left_res == NULL))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = palloc0(sizeof(TSMapElement));
+
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement
+ * Common entry point for all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
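+		/*
+		 * The branch fires when the condition's null-ness agrees with the
+		 * WHEN polarity; a KEEP command reuses the condition's own output.
+		 */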
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helpers functions
+ *-------------------
+ */
+
+/*
+ * Processing of EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get the last accepted result of the phrase dictionary
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states which weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that haven't processed the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Do lexize processing for the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
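+	/* Mark all stored dictionary states untouched; states left unprocessed
+	 * after this token trigger the rollback path below */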
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
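+			/*
+			 * A multi-token dictionary gave up on its pending phrase: put
+			 * the buffered intermediate tokens back at the head of the work
+			 * queue so they are reprocessed, this time skipping that
+			 * dictionary.
+			 */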
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in the LexemesBuffer and
+	 * removed at the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
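+	/* type is primed with a positive dummy value so the parser is called on
+	 * the first iteration; type <= 0 means the parser input is exhausted
+	 * while queued tokens may still need processing */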
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from the input text and add it to the processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: regconfiguration(Oid) cfgId, text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
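+		/* Output columns: alias, description, token, dictionaries,
+		 * configuration map, applied command and produced lexemes */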
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		initStringInfo(str);
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid *currentDictionary = NULL;
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[6] = str->data;
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
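+	/* type starts positive so the parser is polled on the first iteration */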
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
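+		/* drain lexemes still queued in ldata, even once the parser is done */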
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
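+				/*
+				 * Open the highlight only at the first word of a selected run
+				 * and close it at the last, so that consecutive selected
+				 * words are wrapped as a single fragment.
+				 */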
-				if (wrd->selected)
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03aea4f..0dd846bece 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b381782a3..f251e83ff6 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 97347780d3..1ff1a9255c 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -418,11 +418,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -453,8 +452,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -468,13 +467,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -486,6 +483,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -495,51 +493,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
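+			/* each row now stores the complete jsonb map for its token type */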
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index d066f4f00b..c5cb3c62f7 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14223,15 +14223,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
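+	/* Version 11 and later store the mapping as jsonb; dump it as text */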
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  pg_catalog.dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 0c3be1f504..729242e8e0 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4646,25 +4646,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
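+	/* As of version 11 the mapping is jsonb; use dictionary_mapping_to_text() */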
+	if (pset.sversion >= 110000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 7dd9d108d6..589bce476b 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -262,7 +262,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 9bf20c059b..bd9549ac39 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -4988,6 +4988,12 @@ DESCR("transform jsonb to tsvector");
 DATA(insert OID = 4212 (  to_tsvector		PGNSP PGUID 12 100 0 0 0 f f f t f i s 2 0 3614 "3734 114" _null_ _null_ _null_ _null_ _null_ json_to_tsvector_byid _null_ _null_ _null_ ));
 DESCR("transform json to tsvector");
 
+DATA(insert OID = 8891 (  dictionary_mapping_to_text	PGNSP PGUID 12 100 0 0 0 f f f t f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ dictionary_mapping_to_text _null_ _null_ _null_ ));
+DESCR("returns text representation of dictionary configuration map");
+
+DATA(insert OID = 8892 (  ts_debug			PGNSP PGUID 12 100 1 0 0 f f f t t s s 2 0 2249 "3734 25" "{3734,25,25,25,25,3770,25,25,1009}" "{i,i,o,o,o,o,o,o,o}" "{cfgId,inputText,alias,description,token,dictionaries,configuration,command,lexemes}" _null_ _null_ ts_debug _null_ _null_ _null_));
+DESCR("debug function for text search configuration");
+
 DATA(insert OID = 3752 (  tsvector_update_trigger			PGNSP PGUID 12 1 0 0 0 f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_byid _null_ _null_ _null_ ));
 DESCR("trigger for automatic update of tsvector column");
 DATA(insert OID = 3753 (  tsvector_update_trigger_column	PGNSP PGUID 12 1 0 0 0 f f f f f v s 0 0 2279 "" _null_ _null_ _null_ _null_ _null_ tsvector_update_trigger_bycolumn _null_ _null_ _null_ ));
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index a3d9e3f21f..65a9a73369 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -22,6 +22,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 
 /* ----------------
  *		pg_ts_config_map definition.  cpp turns this into
@@ -30,49 +31,114 @@
  */
 #define TSConfigMapRelationId	3603
 
+/*
+ * Create a typedef so that the same type name can be used in the
+ * generated DB initialization script and in the C source code.
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+
+	/*
+	 * mapdicts is the only variable-length field, so it is safe to access
+	 * it directly rather than hiding it from the C interface.
+	 */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type; /* Type of the element */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
 /* ----------------
- *		compiler constants for pg_ts_config_map
+ *		Compiler constants for pg_ts_config_map
  * ----------------
  */
-#define Natts_pg_ts_config_map				4
+#define Natts_pg_ts_config_map				3
 #define Anum_pg_ts_config_map_mapcfg		1
 #define Anum_pg_ts_config_map_maptokentype	2
-#define Anum_pg_ts_config_map_mapseqno		3
-#define Anum_pg_ts_config_map_mapdict		4
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
 
 /* ----------------
  *		initial contents of pg_ts_config_map
  * ----------------
  */
 
-DATA(insert ( 3748	1	1	3765 ));
-DATA(insert ( 3748	2	1	3765 ));
-DATA(insert ( 3748	3	1	3765 ));
-DATA(insert ( 3748	4	1	3765 ));
-DATA(insert ( 3748	5	1	3765 ));
-DATA(insert ( 3748	6	1	3765 ));
-DATA(insert ( 3748	7	1	3765 ));
-DATA(insert ( 3748	8	1	3765 ));
-DATA(insert ( 3748	9	1	3765 ));
-DATA(insert ( 3748	10	1	3765 ));
-DATA(insert ( 3748	11	1	3765 ));
-DATA(insert ( 3748	15	1	3765 ));
-DATA(insert ( 3748	16	1	3765 ));
-DATA(insert ( 3748	17	1	3765 ));
-DATA(insert ( 3748	18	1	3765 ));
-DATA(insert ( 3748	19	1	3765 ));
-DATA(insert ( 3748	20	1	3765 ));
-DATA(insert ( 3748	21	1	3765 ));
-DATA(insert ( 3748	22	1	3765 ));
+DATA(insert ( 3748	1	"[3765]" ));
+DATA(insert ( 3748	2	"[3765]" ));
+DATA(insert ( 3748	3	"[3765]" ));
+DATA(insert ( 3748	4	"[3765]" ));
+DATA(insert ( 3748	5	"[3765]" ));
+DATA(insert ( 3748	6	"[3765]" ));
+DATA(insert ( 3748	7	"[3765]" ));
+DATA(insert ( 3748	8	"[3765]" ));
+DATA(insert ( 3748	9	"[3765]" ));
+DATA(insert ( 3748	10	"[3765]" ));
+DATA(insert ( 3748	11	"[3765]" ));
+DATA(insert ( 3748	15	"[3765]" ));
+DATA(insert ( 3748	16	"[3765]" ));
+DATA(insert ( 3748	17	"[3765]" ));
+DATA(insert ( 3748	18	"[3765]" ));
+DATA(insert ( 3748	19	"[3765]" ));
+DATA(insert ( 3748	20	"[3765]" ));
+DATA(insert ( 3748	21	"[3765]" ));
+DATA(insert ( 3748	22	"[3765]" ));
 
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index fce48026b6..1d3896d494 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -385,6 +385,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 699fa77bc7..6103b12cce 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3434,6 +3434,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* actual type is determined by kind */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3446,6 +3490,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 4dff55a8e9..3371f286a8 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -220,6 +220,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -242,6 +243,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("matched", MATCHED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d54af..4633dd7618 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000000..79e618052e
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provides an interface to convert a text search configuration into JSONB
+ * and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move rule list into specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa68e..d970eec0ab 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index d56c70c847..08c2674d46 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1089,14 +1089,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0c1d7c7675..512af5975e 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,6 +679,163 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+                                                to_tsvector                                                
+-----------------------------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'pgsql':7 'postgr':7 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index d63fb12f1d..c0e9fc5c8f 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index 656cace451..4e6730fa69 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -545,10 +545,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 1633c0d066..080ddc486a 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -189,6 +240,50 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 1c8520b3e9..6f8af63c1a 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#26Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksandr Parfenov (#25)
1 attachment(s)
Re: Flexible configuration for full-text search

Hi,

After last commits related to storing initial data-set of catalog and
commits related to MERGE command with changes in gram.y the patch
doesn't apply. A rebased in the attachment.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v12.patchtext/x-patchDownload
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e..37b9337 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
  {����}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 3102139..6ce21cd 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -2,7 +2,6 @@ CREATE EXTENSION unaccent;
 
 -- must have a UTF8 database
 SELECT getdatabaseencoding();
-
 SET client_encoding TO 'KOI8';
 
 SELECT unaccent('foobar');
@@ -16,3 +15,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', '����');
 SELECT ts_lexize('unaccent', '����');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94..ecc3704 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,8 +22,12 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
@@ -89,6 +93,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression. The dictionary expression is a
+      condition/command/else triple that defines how the text is processed.
+      The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
      <para>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -155,6 +170,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
  </refsect1>
 
  <refsect1>
+  <title>Dictionaries Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats. The simplest format is the name of a
+    dictionary to use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, connect the dictionaries with operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on sets.
+    The special operator <literal>MAP</literal> takes the output of its left
+    subexpression and uses it as the input to its right subexpression.
+   </para>
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    parts. The first is a configuration used to construct the lexeme set that
+    the matching condition is checked against. If the condition is met, the
+    command is executed; otherwise the <literal>ELSE</literal> branch is
+    executed. The command may differ from the condition; use the command
+    <literal>KEEP</literal> to avoid repeating the same configuration in both
+    the condition and the command parts.
+   </para>
+  </refsect2>
+ </refsect1>
+
+ <refsect1>
   <title>Examples</title>
 
   <para>
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, the stemmer
+   dictionaries are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine searches for both exact and processed forms, the vector
+    should contain the lexemes produced by <literal>simple</literal> for the
+    exact form of the word as well as the lexemes produced by a linguistic-aware
+    dictionary (e.g. <literal>english_stem</literal>) for processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
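+  <para>
+   A filtering dictionary can be chained with the <literal>MAP</literal>
+   operator.  As a minimal sketch (assuming the <literal>unaccent</literal>
+   dictionary from the <filename>unaccent</filename> module is installed),
+   the following strips accents from tokens before stemming:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>
+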
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 19f5851..de14dae 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type.  Conditions and commands are expressions on
+    dictionaries (<xref linkend="textsearch-dictionaries"/>) with a matching
+    clause in the condition.  The command of the first pair whose condition
+    evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token.  For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2316,8 +2317,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in the comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2349,38 +2350,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token is discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to
+   generate the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries as operands, the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>,
+   <literal>INTERSECT</literal>,
+   and the special operator <literal>MAP</literal>.
+   The <literal>MAP</literal> operator uses the output of its left subexpression
+   as the input for its right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the
+    additional keyword <literal>KEEP</literal>, which reuses the result of the
+    condition as the output (see the sketch below).
+  </para>
+
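+  <para>
+   For instance, a hypothetical mapping whose condition combines two
+   dictionaries and whose command reuses the result of that condition via
+   <literal>KEEP</literal> could be written as:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+    ALTER MAPPING FOR asciiword WITH
+        CASE english_ispell UNION english_stem WHEN MATCH THEN KEEP END;
+</programlisting>
+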
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted in turn to process a token,
+   and the first non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place the case with the most narrow, most specific dictionary first, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, using the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both the English and German languages,
+   using the operator-based variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration makes it possible to search a collection of multilingual
+   documents without specifying a language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be used
+   to mix an exact-form search for one word with a linguistic search for the others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example, the <literal>simple</literal> dictionary is used to prevent a word in the query from being normalized.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     Since a <literal>tsvector</literal> carries no information about the origin
+     of each lexeme, a stemmed form that happens to match the exact form used in
+     a query can produce false-positive matches.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the <literal>MAP</literal>
+   operator. If the filtering dictionary returns <literal>NULL</literal>, it passes the
+   initial token to the right subexpression unchanged, as shown in the sketch below.
   </para>
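+
+  <para>
+   As a minimal sketch (assuming the <literal>unaccent</literal> dictionary
+   from the <xref linkend="unaccent"/> module and the built-in
+   <literal>russian_stem</literal> stemmer are available):
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+    ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>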
 
   <sect2 id="textsearch-stopwords">
@@ -2547,9 +2636,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2561,9 +2650,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3192,6 +3284,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                       word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
     WITH pg_dict, english_ispell, english_stem;
 </programlisting>
 
@@ -3267,7 +3374,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3311,14 +3419,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3331,32 +3445,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3382,13 +3496,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 85a17a4..a3da25e 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -948,55 +948,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a84351..53ee576 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1271,6 +1284,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 }
 
 /*
+ * Parse parse node extracted from dictionary mapping and transform it into
+ * internal representation of dictionary mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+		{
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+			caseObject->elsebranch->parent = result;
+		}
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
+
+/*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
 static void
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index b856fe2..34c4295 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4490,6 +4490,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5512,6 +5548,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 3994695..48a5a7e 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2222,6 +2222,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 }
 
 static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
+static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
 	COMPARE_NODE_FIELD(dictname);
@@ -3580,6 +3610,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index dd0c26c..37dd2c5 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 	MergeWhenClause		*mergewhen;
 }
 
@@ -311,7 +313,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				analyze_option_list analyze_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -589,6 +591,12 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
 
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_config dictionary_map_case
+					dictionary_map_action opt_dictionary_map_case_else
+					dictionary_config_comma
+
 %type <node>	merge_when_clause opt_merge_when_and_condition
 %type <list>	merge_when_list
 
@@ -653,13 +661,13 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
+	MAP MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
 	MINUTE_P MINVALUE MODE MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
@@ -10377,24 +10385,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10446,6 +10456,100 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_config { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_config { $$ = $2; }
+			| { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_config WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_config:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_config dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
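+/*
+ * Note: the set operators (UNION, EXCEPT, INTERSECT) and MAP share a single
+ * precedence level and associate to the left; parentheses force a different
+ * grouping, e.g. "a UNION (b MAP c)".  Comma-separated lists are restricted
+ * to plain dictionary names.
+ */
+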
+dictionary_map_command_expr_paren:
+			'(' dictionary_config ')'	{ $$ = $2; }
+			| dictionary_map_case			{ $$ = $1; }
+			| dictionary_config_comma		{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -15254,6 +15358,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATCHED
@@ -15562,6 +15667,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468a..e61ad4f 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000..714f2a8
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1114 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "catalog/pg_namespace.h"
+#include "catalog/namespace.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, based on the assumption that a stack of 1024
+ * frames is enough for parsing configurations
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element currently under construction */
+} TSMapJsonbParseData;
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement * JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the namespace into StringInfo variable result
+ */
+static void
+TSMapPrintNamespace(Oid  namespaceId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_namespace namespace;
+
+	maprel = heap_open(NamespaceRelationId, AccessShareLock);
+	mapidx = index_open(NamespaceOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(namespaceId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	namespace = (Form_pg_namespace) GETSTRUCT(maptup);
+	appendStringInfoString(result, namespace->nspname.data);
+	appendStringInfoChar(result, '.');
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	if (!TSDictionaryIsVisible(dictId))
+	{
+		TSMapPrintNamespace(dict->dictnamespace, result);
+	}
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+	Assert(expression->left);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->left, result);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	Assert(expression->right);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->right, result);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Uses other function and serves for element type detection.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as a text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
+
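+/*
+ * For illustration (a sketch inferred from the builders below), a mapping
+ * "CASE english_ispell WHEN MATCH THEN KEEP ELSE english_stem END" would be
+ * serialized roughly as
+ *
+ *   {"condition": <OID of english_ispell>,
+ *    "command": "keep",
+ *    "elsebranch": <OID of english_stem>,
+ *    "match": 1}
+ *
+ * Dictionaries are stored as numeric OIDs, expressions as objects with
+ * "operator", "left" and "right" keys, and the KEEP command as the string
+ * "keep".
+ */
+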
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * Buffer size is based on the maximum width of a 32-bit integer: up to
+	 * 11 characters including the sign, plus the terminating NUL character.
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common point for all types of TSMapElement
+ */
+JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
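+
+/*
+ * An illustrative sketch of the stored form (the OIDs below are made up):
+ * a mapping such as
+ *
+ *     CASE WHEN english_hunspell THEN english_hunspell ELSE english_stem END
+ *
+ * is serialized roughly as
+ *
+ *     {"condition": 16384, "command": 16384, "elsebranch": 16385, "match": 1}
+ *
+ * Dictionaries appear as their numeric OIDs, expressions as
+ * {"operator": ..., "left": ..., "right": ...} objects, and the KEEP
+ * command as the string "keep".
+ */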
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * JsonbValue strings may not be null-terminated. Make a null-terminated
+	 * copy so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 || strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * JsonbValue strings may not be null-terminated. Make a null-terminated
+	 * copy so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 || strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * JsonbValue strings may not be null-terminated. Make a null-terminated
+	 * copy so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * JsonbValue strings may not be null-terminated. Make a null-terminated
+	 * copy so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objected are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as the KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object. At the first key inside a
+			 * JSONB object, the element type is selected based on the key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside constructing object
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of the data array. Unused elements
+								 * in data are filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit(void)
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID into the list. If it is already stored in the list, it
+ * won't be added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
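+
+/*
+ * Usage sketch (the variable names are illustrative): the list always keeps
+ * an InvalidOid sentinel after the last stored element, so callers can scan
+ * it without knowing its length:
+ *
+ *     OidList    *list = OidListInit();
+ *     int         i;
+ *
+ *     OidListAdd(list, myDictOid);
+ *     for (i = 0; list->data[i] != InvalidOid; i++)
+ *         ... use list->data[i] ...
+ */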
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Used for internal recursive calls.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move a FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5..f476abb 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of a token produced by the FTS parser. It also carries
+ * intermediate processing state used during phrase dictionary processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Is accepted by some dictionary */
+	bool	   *rejected;		/* Is rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary hasn't finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed, accepted,
+									 * and used in the last result returned
+									 * by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* True if the dictionary took control during
+								 * current token processing */
+	TSLexeme   *tmpResult;		/* Last result returned by thesaurus-like
+								 * dictionary, if dictionary still waiting for
+								 * more lexemes */
+} DictState;
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration that produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possibly accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that are already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * running the lexize process multiple times
+								 * with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid infinite loop in
+								 * configuration with phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Last token tokentype */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Set the token corresponding to the current lexeme
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
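+
+/*
+ * Note that the returned pointer points into the list's own array, which is
+ * moved by repalloc on later DictStateListAdd/DictStateListRemove calls, so
+ * it must not be cached across list modifications.
+ */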
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of entries in a TSLexeme array, excluding the terminating
+ * empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
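+
+/*
+ * A worked example (lexeme strings are hypothetical): the input
+ * {foo/nv=1, foo/nv=1, bar/nv=2, bar/nv=3} collapses to {foo/nv=1, bar/nv=2}.
+ * The second "foo" repeats inside the same nvariant group, while nvariant
+ * group 3 duplicates group 2 as a whole set.
+ */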
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
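+
+/*
+ * The merge interleaves "runs" of lexemes: a run extends while the following
+ * lexemes lack TSL_ADDPOS, i.e. while they share one position. Runs are then
+ * taken alternately from the left and the right input, which preserves the
+ * positional grouping when regular and multi-input results are recombined
+ * (see TSLexemeFilterMulti below).
+ */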
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, right lexemes are appended after the last left lexeme
+ * and the first of them is marked with the TSL_ADDPOS flag
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
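+
+/*
+ * A small illustration (values are hypothetical): with left = {a, b} and
+ * right = {c}, TSLexemeUnionOpt(left, right, true) yields {a, b, c}, where
+ * "c" carries TSL_ADDPOS and its nvariant is shifted past the largest left
+ * nvariant so that variant groups from the two inputs never clash.
+ */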
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only those stored in the left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
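+
+/*
+ * For instance (hypothetical values): with left = {a, b} and right = {b, c},
+ * TSLexemeExcept returns {a} and TSLexemeIntersect returns {b}. Both compare
+ * lexemes by string only and always take the surviving entries from the left
+ * list.
+ */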
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary waits for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as not null.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
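+
+/*
+ * This evaluation backs the "dictionary IS [NOT] NULL" conditions of the
+ * CASE syntax: a dictionary that is still accumulating a multi-token phrase
+ * is treated as not-NULL, so a pending thesaurus match isn't discarded
+ * prematurely.
+ */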
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			 (left_res == NULL || (left_res->flags & TSL_FILTER) == 0))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = NULL;
+		relatedRuleTmp = palloc0(sizeof(TSMapElement));
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
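+
+/*
+ * In other words, each lexeme produced by the left operand of the expression
+ * is re-submitted as a fresh token to the right operand and the per-lexeme
+ * results are unioned. This is what allows the new syntax to reproduce the
+ * old TSL_FILTER behavior without a special flag on the dictionary.
+ */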
+
+/*
+ * Execute a TSMapElement
+ * Common point of all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helpers functions
+ *-------------------
+ */
+
+/*
+ * Processing of EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get last accepted result of the phrase-dictionary
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states which weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Run lexize processing for the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay returning the result to avoid emitting wrong
+		 * lexemes in case the thesaurus phrase is rejected.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
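+	/*
+	 * Post-process: filter multi-variant (TSL_MULTI) lexemes and drop
+	 * duplicates before returning.
+	 */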
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in LexemesBuffer and removed
+	 * at the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
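+	/* Prime the loop: a positive value forces the first prstoken call */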
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by a temporary TSMapElement. Only TSMAP_EXPRESSION
+ * nodes are freed; other element types are left untouched.
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: Oid cfgId (a regconfig), text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
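+	/*
+	 * Pull tokens from the parser until LexizeExec reports at least one
+	 * fully processed token.
+	 */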
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
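+	/* The parser is exhausted; drain tokens still queued for processing */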
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		initStringInfo(str);
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid *currentDictionary = NULL;
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+		{
+			appendStringInfoChar(str, '}');
+			values[6] = str->data;
+		}
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
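+				/* emit startsel only at the start of a selected run */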
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
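+				/* emit stopsel only at the end of a selected run */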
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03ae..0dd846b 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b38178..f251e83 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index f11cba4..c0f98ba 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -418,11 +418,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -453,8 +452,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -468,13 +467,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -486,6 +483,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -495,51 +493,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
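+			/*
+			 * Each row now stores the complete map for its token type as
+			 * Jsonb; decode it and move it into the cache context.
+			 */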
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 93c869f..ff936c3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14255,15 +14255,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, m.mapseqno\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 410131e..95e3a89 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4646,25 +4646,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
+	if (pset.sversion >= 110000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 42499e2..79fb3f0 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -262,7 +262,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f3b9c33..775f92e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -9020,6 +9020,19 @@
   prorettype => 'regconfig', proargtypes => '',
   prosrc => 'get_current_ts_config' },
 
+{ oid => '8891', descr => 'returns text representation of dictionary configuration map',
+  proname => 'dictionary_mapping_to_text', provolatile => 's',
+  prorettype => 'text', proargtypes => 'regconfig int4',
+  prosrc => 'dictionary_mapping_to_text' },
+
+{ oid => '8892', descr => 'debug function for a text search configuration',
+  proname => 'ts_debug', provolatile => 's',
+  prorettype => 'record', proargtypes => 'regconfig text',
+  proallargtypes => '{regconfig,text,text,text,text,_regdictionary,text,text,_text}',
+  proargmodes => '{i,i,o,o,o,o,o,o,o}',
+  proargnames => '{ftsconfig,inputtext,alias,description,token,dictionaries,configuration,command,lexemes}',
+  prosrc => 'ts_debug' },
+
 { oid => '3736', descr => 'I/O',
   proname => 'regconfigin', provolatile => 's', prorettype => 'regconfig',
   proargtypes => 'cstring', prosrc => 'regconfigin' },
diff --git a/src/include/catalog/pg_ts_config_map.dat b/src/include/catalog/pg_ts_config_map.dat
index 090a1ca..4aa3612 100644
--- a/src/include/catalog/pg_ts_config_map.dat
+++ b/src/include/catalog/pg_ts_config_map.dat
@@ -12,24 +12,24 @@
 
 [
 
-{ mapcfg => '3748', maptokentype => '1', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '2', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '3', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '4', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '5', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '6', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '7', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '8', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '9', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '10', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '11', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '15', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '16', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '17', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '18', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '19', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '20', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '21', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '22', mapseqno => '1', mapdict => '3765' },
+{ mapcfg => '3748', maptokentype => '1', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '2', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '3', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '4', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '5', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '6', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '7', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '8', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '9', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '10', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '11', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '15', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '16', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '17', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '18', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '19', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '20', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '21', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '22', mapdicts => '[3765]' },
 
 ]
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index 2120021..aed1b20 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -19,6 +19,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 #include "catalog/pg_ts_config_map_d.h"
 
 /* ----------------
@@ -26,14 +27,91 @@
  *		typedef struct FormData_pg_ts_config_map
  * ----------------
  */
+#define TSConfigMapRelationId	3603
+
+/*
+ * Create a typedef so that the same type name can be used in the
+ * generated DB initialization script and in C source code
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603,TSConfigMapRelationId) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+
+	/*
+	 * mapdicts is the only variable-length field, so it is safe to use it
+	 * directly instead of hiding it from the C interface.
+	 */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type;			/* element type: one of the TSMAP_* constants */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
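+
+/*
+ * For example, the mapping "english_stem UNION simple" is stored as a
+ * TSMAP_EXPRESSION element whose operator is TSMAP_OP_UNION and whose
+ * left and right operands are TSMAP_DICTIONARY elements.
+ */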
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;			/* true for WHEN MATCH, false for WHEN NO MATCH */
+} TSMapCase;
+
+/* ----------------
+ *		Compiler constants for pg_ts_config_map
+ * ----------------
+ */
+#define Natts_pg_ts_config_map				3
+#define Anum_pg_ts_config_map_mapcfg		1
+#define Anum_pg_ts_config_map_maptokentype	2
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
+
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index defdbae..0460cc5 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -388,6 +388,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index c840538..c88f658 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3454,6 +3454,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* actual type is determined by kind */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3466,6 +3510,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 81f758a..e0b790f 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -221,6 +221,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -243,6 +244,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("matched", MATCHED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d5..4633dd7 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000..79e6180
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract a TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move a mapping expression tree into the specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+
+/* Free all nodes of a mapping expression tree */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Deeply compare two TSMapElements; parent pointers are not compared */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa..d970eec 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index d56c70c..08c2674 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1089,14 +1089,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 0c1d7c7..512af59 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -420,6 +420,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -580,6 +679,163 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+                                                to_tsvector                                                
+-----------------------------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'pgsql':7 'postgr':7 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index b088ff0..9ebf5b9 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index 656cace..4e6730f 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -545,10 +545,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 1633c0d..080ddc4 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -117,6 +117,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
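+-- UNION emits the lexemes of both operands: with the built-in english_stem
+-- and simple dictionaries, 'books' is expected to yield both the stemmed
+-- 'book' and the exact 'books'.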
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
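+-- INTERSECT keeps only lexemes produced by both operands: both yield 'book'
+-- for 'book', but for 'books' the stemmer yields 'book' while simple yields
+-- 'books', so the intersection is expected to be empty.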
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
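+-- EXCEPT emits the lexemes of the left operand that the right one does not
+-- produce: for 'books', simple yields 'books' and english_stem yields
+-- 'book', so 'books' is expected to survive; 'book' should yield nothing.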
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
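+-- If the ispell dictionary recognizes the token, KEEP reuses the output of
+-- the condition; otherwise the ELSE branch falls back to english_stem.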
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -189,6 +240,50 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 637bfb3..26d771b 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#27Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksandr Parfenov (#26)
1 attachment(s)
Re: Flexible configuration for full-text search

Hello hackers,

A new version of the patch is attached. There are no changes since
the last version except a rebase onto current HEAD.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v13.patchtext/x-patchDownload
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e9c7..37b9337635 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
 {ЕЖИК}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 310213994f..7e118320b4 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -16,3 +16,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', 'ёлка');
 SELECT ts_lexize('unaccent', 'ЁЖИК');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
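+-- unaccent acts here as a filtering dictionary: its output (if any) is fed
+-- to russian_stem; when unaccent returns NULL, the original token is passed
+-- through to russian_stem unchanged.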
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94b27..ecc37044a9 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -21,8 +21,12 @@ PostgreSQL documentation
 
  <refsynopsisdiv>
 <synopsis>
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
@@ -88,6 +92,17 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
     </listitem>
    </varlistentry>
 
+   <varlistentry>
+    <term><replaceable class="parameter">config</replaceable></term>
+    <listitem>
+     <para>
+      The dictionary tree expression.  A dictionary expression is a
+      condition/command/else triple that defines how the text is processed.
+      The <literal>ELSE</literal> part is optional.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry>
     <term><replaceable class="parameter">old_dictionary</replaceable></term>
     <listitem>
@@ -133,7 +148,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -154,6 +169,53 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 
  </refsect1>
 
+ <refsect1>
+  <title>Dictionary Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats.  The simplest format is the name of a single
+    dictionary to use for token processing.
+   </para>
+   <para>
+    In order to use more than one dictionary simultaneously, the dictionaries
+    should be connected by operators.  The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on
+    sets.  The special operator <literal>MAP</literal> takes the output of its
+    left subexpression and uses it as the input of its right subexpression.
+   </para>
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure.  It consists of three
+    replaceable parts.  The first is a configuration used to construct the set
+    of lexemes checked by the match condition.  If the condition is satisfied,
+    the command is executed.  Use the command <literal>KEEP</literal> to avoid
+    repeating the same configuration in the condition and command parts;
+    however, the command may also differ from the condition.  Otherwise, the
+    <literal>ELSE</literal> branch is executed.
+   </para>
+  </refsect2>
+ </refsect1>
+
  <refsect1>
   <title>Examples</title>
 
@@ -167,6 +229,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both English and German.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized.  Otherwise, the stemmer
+   dictionaries are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine search for both exact and processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact
+    form of each word as well as lexemes produced by a linguistic-aware
+    dictionary (e.g. <literal>english_stem</literal>) for the processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
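+
+  <para>
+   The <literal>MAP</literal> operator chains a filtering dictionary with a
+   regular one.  As a sketch (assuming the <literal>unaccent</literal>
+   dictionary from the <filename>unaccent</filename> module is installed),
+   the following feeds unaccented tokens into the Russian stemmer:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>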
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 8075ea94e7..e02e329b15 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs (<xref linkend="textsearch-dictionaries"/>) is
+    consulted, where the list can vary depending on the token type.  A
+    condition and a command are expressions on dictionaries; the condition
+    additionally contains a matching clause.  The command of the first pair
+    whose condition evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token.  For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2312,8 +2313,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in the comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2345,38 +2346,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition.  If no case is
+   selected, the token is discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples.  The
+   conditions are evaluated in order to select the appropriate command, which
+   generates the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression whose operands are dictionaries, combined with the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP</literal>.
+   The <literal>MAP</literal> operator uses the output of its left subexpression as
+   the input of its right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the
+    additional keyword <literal>KEEP</literal>, which reuses the result of the
+    condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration.  Each dictionary is consulted in turn to process a
+   token, and the first non-<literal>NULL</literal> output is accepted as the
+   processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the most narrow, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
    recognizes everything.  For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, using the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both English and German, using the
+   operator-based variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration makes it possible to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be used
+   to mix exact-form search for some words with linguistic search for others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example, the <literal>simple</literal> dictionary is used to prevent the normalization of words in a query.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     Because a <literal>tsvector</literal> keeps no information about the
+     origin of each lexeme, false-positive matches may occur when a stemmed
+     form coincides with the exact form used in a query.
+    </para>
+   </caution>
+
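+  <para>
+   As an illustration of this caution (a sketch, reusing the
+   <literal>exact_and_linguistic</literal> configuration above): the document
+   <literal>'Supernova stars'</literal> produces the stemmed lexeme
+   <literal>star</literal>, so an exact-form query for <literal>star</literal>
+   matches even though that exact word never appears in the document:
+  </para>
+
+<programlisting>
+SELECT to_tsvector('exact_and_linguistic', 'Supernova stars') @@
+       to_tsquery('simple', 'star');
+ ?column?
+----------
+ t
+</programlisting>
+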
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the <literal>MAP</literal>
+   operator.  If the filtering dictionary returns <literal>NULL</literal>, the initial
+   token is passed to the right subexpression.
   </para>
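+
+  <para>
+   For example (a sketch, assuming the <literal>unaccent</literal> dictionary
+   is installed in a Russian-based configuration):
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_russian_config
+    ALTER MAPPING FOR asciiword, word WITH unaccent MAP russian_stem;
+</programlisting>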
 
   <sect2 id="textsearch-stopwords">
@@ -2543,9 +2632,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2557,9 +2646,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3184,6 +3276,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
     Now we can set up the mappings for words in configuration
     <literal>pg</literal>:
 
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
@@ -3263,7 +3370,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3307,14 +3415,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3327,32 +3441,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3378,13 +3492,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 8cd8bf40ac..66c07ae1b3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -948,55 +948,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a843512d1..53ee576223 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
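+			/* the array returned by TSMapGetDictionaries is InvalidOid-terminated */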
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1270,6 +1283,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
+/*
+ * Transform a parse-tree node of a dictionary mapping into the internal
+ * representation of the mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			pfree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 1c12075b01..302a788a07 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4444,6 +4444,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5457,6 +5493,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 6a971d0141..ac642f64ab 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2181,6 +2181,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 	return true;
 }
 
+static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
 static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
@@ -3531,6 +3561,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 90dfac2cb1..fd2ef8def7 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -309,7 +311,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				analyze_option_list analyze_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -584,6 +586,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>		partbound_datum PartitionRangeDatum
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_config dictionary_map_case
+					dictionary_map_action opt_dictionary_map_case_else
+					dictionary_config_comma
 
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
@@ -646,13 +653,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
+	MINUTE_P MINVALUE MODE MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10370,24 +10378,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10439,6 +10449,100 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| { $$ = false; }
+		;
+
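+/*
+ * Comma-separated dictionary lists (the legacy syntax) are parsed via right
+ * recursion into a chain of TSMAP_OP_COMMA expression nodes.
+ */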
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_config { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_config { $$ = $2; }
+			| { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_config WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_config:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_config dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_config ')'	{ $$ = $2; }
+			| dictionary_map_case			{ $$ = $1; }
+			| dictionary_config_comma		{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
 
 /*****************************************************************************
  *
@@ -15129,6 +15233,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15435,6 +15540,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468ae9e..e61ad4fa1d 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000000..714f2a8ab2
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1114 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "catalog/pg_namespace.h"
+#include "catalog/namespace.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, on the assumption that a stack of 1024 frames
+ * is enough for parsing any configuration
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is in construction now */
+} TSMapJsonbParseData;
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement *JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the namespace into StringInfo variable result
+ */
+static void
+TSMapPrintNamespace(Oid namespaceId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_namespace namespace;
+
+	maprel = heap_open(NamespaceRelationId, AccessShareLock);
+	mapidx = index_open(NamespaceOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(namespaceId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	if (!HeapTupleIsValid(maptup))
+		elog(ERROR, "cache lookup failed for namespace %u", namespaceId);
+	namespace = (Form_pg_namespace) GETSTRUCT(maptup);
+	appendStringInfoString(result, namespace->nspname.data);
+	appendStringInfoChar(result, '.');
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	if (!HeapTupleIsValid(maptup))
+		elog(ERROR, "cache lookup failed for text search dictionary %u", dictId);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	if (!TSDictionaryIsVisible(dictId))
+	{
+		TSMapPrintNamespace(dict->dictnamespace, result);
+	}
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+	Assert(expression->left);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->left, result);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	Assert(expression->right);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->right, result);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Uses other function and serves for element type detection.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as a text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
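+
+/*
+ * A sketch of the JSONB encoding implemented below: a dictionary is stored
+ * as its OID, the KEEP command as the string "keep", an expression as
+ * {"operator": <code>, "left": <element>, "right": <element>}, and a CASE
+ * as {"condition": <element>, "command": <element>,
+ * "elsebranch": <element>, "match": 0 or 1}.
+ */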
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * A signed 32-bit integer needs at most 12 characters including the
+	 * sign and the terminating zero byte, so the 16-byte buffer is enough.
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common point for all types of TSMapElement
+ */
+static JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it so that
+	 * strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 ||
+		strcmp(key, "condition") == 0 ||
+		strcmp(key, "command") == 0 ||
+		strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check is a key one of FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it so that
+	 * strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 ||
+		strcmp(key, "left") == 0 ||
+		strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it so that
+	 * strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it so that
+	 * strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object. On the first key inside a
+			 * JSONB object, the element type is selected based on the key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of a new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside constructing object
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = (JsonbValueToInt(&value) == 1);
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as a dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of the data array. Unused slots
+								 * are filled with InvalidOid */
+} OidList;
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit(void)
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID to the list. If it is already present, it is not added again.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
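+
+/*
+ * Usage sketch: after
+ *
+ *		OidList *list = OidListInit();
+ *		OidListAdd(list, dictOid);
+ *		OidListAdd(list, dictOid);
+ *
+ * the list contains dictOid exactly once; the duplicate add is a no-op.
+ */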
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Used for internal recursive calls.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+		default:
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+		default:
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move a FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+	result->parent = NULL;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+	result->parent = NULL;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			result->parent = NULL;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			result->parent = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+			default:
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5660..f476abb323 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of token produced by FTS parser. It contains intermediate
+ * lexemes in case of phrase dictionary processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Accepted by some dictionary */
+	bool	   *rejected;		/* Rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary has not finished processing
+								 * and waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which were processed and
+									 * accepted, i.e. used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not yet accepted,
+										 * but were processed by a
+										 * thesaurus-like dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Whether the dictionary took control
+								 * during current token processing */
+	TSLexeme   *tmpResult;		/* Last result returned by thesaurus-like
+								 * dictionary, if dictionary still waiting for
+								 * more lexemes */
+} DictState;
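+
+/*
+ * A DictState exists while a multi-input (thesaurus-like) dictionary has
+ * requested more tokens via DictSubState.getnext; it preserves the
+ * dictionary's private state and the tokens seen so far between calls.
+ */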
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration that
+						 * produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possibly accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes that are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes, used to avoid
+								 * running the lexize process multiple times
+								 * with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but
+									 * may still be rejected later */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid infinite loop in
+								 * configuration with phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Token type of the last token */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponded ParsedLex */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list. Return pointer to detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
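+
+/*
+ * Typical call sequence (sketch): LexizeInit() once per processed text,
+ * LexizeAddLemm() to queue each token produced by the parser into
+ * ld->towork, RemoveHead() to move a spent token to ld->waste, and finally
+ * setCorrLex() to hand the waste list to the caller (or free it).
+ */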
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc0(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Hand the list of already-processed tokens to the caller if requested,
+ * otherwise free it
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get the state of a dictionary by its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove the state of a dictionary by its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert the state of a dictionary with the specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
+/*
+ * Check if there is a saved lexeme generated by the specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by the specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of lexemes in a TSLexeme array, not counting the
+ * terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
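+ *
+ * For example, if two nvariant groups contain exactly the same sequence of
+ * lexemes, only the first group is kept.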
+ */
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the right lexemes are added after the last left
+ * lexeme with the TSL_ADDPOS flag set
+ */
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only which is stored in left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token by the dictionary
+ */
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update the pointer in order to
+				 * work with the copy stored in the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary is waiting for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary is waiting for more lexemes, the result is interpreted
+ * as not NULL.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			((left_res != NULL && (left_res->flags & TSL_FILTER) == 0) || left_res == NULL))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = NULL;
+		relatedRuleTmp = palloc0(sizeof(TSMapElement));
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
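+		/*
+		 * Feed each lexeme produced by the left subexpression to the right
+		 * subexpression as a separate input token and union the results.
+		 */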
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement
+ * Common entry point for all possible types of TSMapElement
+ */
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
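+		/*
+		 * WHEN MATCH fires when the condition yielded a result, WHEN NO
+		 * MATCH when it did not.  A KEEP command re-uses the output of
+		 * the condition itself.
+		 */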
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helper functions
+ *-------------------
+ */
+
+/*
+ * Processing of EOF-like token.
+ * Return all temporary results if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get the last accepted result of a phrase dictionary
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states which weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			/* the list is compacted on removal, so restart the scan */
+			i = -1;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Perform lexize processing on the towork queue in LexizeData
+ */
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
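+	/*
+	 * Mark every saved dictionary state as unprocessed; any state still
+	 * unprocessed after this pass belongs to a dictionary that was not
+	 * consulted for the current token and must be rolled back.
+	 */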
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
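+			/*
+			 * A multi-token dictionary gave up on the current phrase.
+			 * Find its saved token lists so the unconsumed tokens can be
+			 * pushed back onto the towork queue; if no previous result
+			 * was accepted, skip this dictionary on the retry.
+			 */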
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in the LexemesBuffer and
+	 * removed at the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
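+		/*
+		 * Continue while the parser yields tokens or the towork queue
+		 * still holds tokens pushed back by a multi-token dictionary.
+		 */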
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by a temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from the input text and add it to the processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: regconfiguration(Oid) cfgId, text *inputText
+ */
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
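+	/*
+	 * Pump tokens from the parser until LexizeExec releases a batch of
+	 * processed tokens (leftTokens) that can be emitted as result rows.
+	 */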
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		initStringInfo(str);
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid *currentDictionary = NULL;
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+		{
+			appendStringInfoChar(str, '}');
+			values[6] = str->data;
+		}
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
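+				/* open the highlight only at the start of a selected run */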
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
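+				/* close the highlight only at the end of a selected run */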
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03aea4f..0dd846bece 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b381782a3..f251e83ff6 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index f11cba4cce..c0f98bad30 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -418,11 +418,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -453,8 +452,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -468,13 +467,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -486,6 +483,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -495,51 +493,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
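+			/*
+			 * Each row now stores the whole mapping for a token type as
+			 * jsonb; decode it and copy the tree into CacheMemoryContext.
+			 */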
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 463639208d..709ee0e322 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14284,15 +14284,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
+	if (fout->remoteVersion >= 110000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, m.mapseqno\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 6e08515857..52f56ad9f4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4655,25 +4655,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
+	if (pset.sversion >= 110000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 24915824ca..2e9e496692 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -261,7 +261,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 40d54ed030..65d6fa841a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -9023,6 +9023,19 @@
   prorettype => 'regconfig', proargtypes => '',
   prosrc => 'get_current_ts_config' },
 
+{ oid => '8891', descr => 'returns text representation of dictionary configuration map',
+  proname => 'dictionary_mapping_to_text', provolatile => 's',
+  prorettype => 'text', proargtypes => 'regconfig int4',
+  prosrc => 'dictionary_mapping_to_text' },
+
+{ oid => '8892', descr => 'debug function for a text search configuration',
+  proname => 'ts_debug', provolatile => 's',
+  prorettype => 'record', proargtypes => 'regconfig text',
+  proallargtypes => '{regconfig,text,text,text,text,_regdictionary,text,text,_text}',
+  proargmodes => '{i,i,o,o,o,o,o,o,o}',
+  proargnames => '{ftsconfig,inputtext,alias,description,token,dictionaries,configuration,command,lexemes}',
+  prosrc => 'ts_debug' },
+
 { oid => '3736', descr => 'I/O',
   proname => 'regconfigin', provolatile => 's', prorettype => 'regconfig',
   proargtypes => 'cstring', prosrc => 'regconfigin' },
diff --git a/src/include/catalog/pg_ts_config_map.dat b/src/include/catalog/pg_ts_config_map.dat
index 097a9f5e6d..16982dfa98 100644
--- a/src/include/catalog/pg_ts_config_map.dat
+++ b/src/include/catalog/pg_ts_config_map.dat
@@ -12,24 +12,24 @@
 
 [
 
-{ mapcfg => '3748', maptokentype => '1', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '2', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '3', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '4', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '5', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '6', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '7', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '8', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '9', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '10', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '11', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '15', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '16', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '17', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '18', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '19', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '20', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '21', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '22', mapseqno => '1', mapdict => '3765' },
+{ mapcfg => '3748', maptokentype => '1', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '2', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '3', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '4', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '5', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '6', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '7', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '8', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '9', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '10', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '11', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '15', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '16', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '17', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '18', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '19', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '20', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '21', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '22', mapdicts => '[3765]' },
 
 ]
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index 5856323373..9298fa86f1 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -20,6 +20,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 #include "catalog/pg_ts_config_map_d.h"
 
 /* ----------------
@@ -27,14 +28,91 @@
  *		typedef struct FormData_pg_ts_config_map
  * ----------------
  */
+#define TSConfigMapRelationId	3603
+
+/*
+ * Create a typedef in order to use the same type name in the
+ * generated DB initialization script and in C source code
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603,TSConfigMapRelationId) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+
+	/*
+	 * mapdicts is the only variable-length field, so it is safe to use
+	 * it directly, without hiding it from the C interface.
+	 */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type; /* Type of the element */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
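+
+/*
+ * For example, the mapping
+ *		CASE ispell WHEN MATCH THEN KEEP ELSE english_stem END
+ * is stored as a TSMAP_CASE element (match = true) whose condition and
+ * elsebranch are TSMAP_DICTIONARY elements and whose command is a
+ * TSMAP_KEEP element.
+ */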
+
+/* ----------------
+ *		Compiler constants for pg_ts_config_map
+ * ----------------
+ */
+#define Natts_pg_ts_config_map				3
+#define Anum_pg_ts_config_map_mapcfg		1
+#define Anum_pg_ts_config_map_maptokentype	2
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
+
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 43f1552241..3e115404b4 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -384,6 +384,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 6390f7e8c1..5f0c33e14f 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3411,6 +3411,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* Type should be detected by kind value */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3423,6 +3467,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 23db40147b..1f58c319e8 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -221,6 +221,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -243,6 +244,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d54af..4633dd7618 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000000..79e618052e
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+
+/* Replace all occurrences of oldDict by newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move a mapping expression tree into the specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of a mapping expression tree */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa68e..d970eec0ab 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index ef268d348e..a398e247c0 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1097,14 +1097,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 2524ec2768..cfc7579aee 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -450,6 +450,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -610,6 +709,163 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+                                                to_tsvector                                                
+-----------------------------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'pgsql':7 'postgr':7 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index b088ff0d4f..9ebf5b9b26 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index c8291d3973..14bea4c758 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -549,10 +549,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 60906f6549..43203afe61 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -122,6 +122,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -194,6 +245,50 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 637bfb3012..26d771b2b5 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#28Alexander Korotkov
a.korotkov@postgrespro.ru
In reply to: Aleksandr Parfenov (#27)
Re: Flexible configuration for full-text search

Hi, Aleksandr!

On Mon, Jul 9, 2018 at 10:26 AM Aleksandr Parfenov <a.parfenov@postgrespro.ru> wrote:

A new version of the patch is in the attachment. There are no changes
since the last version except refreshing it to the current HEAD.

I took a look at this patch. It applied cleanly, but didn't pass
regression tests.

*** /Users/smagen/projects/postgresql/env/master/src/src/test/regress/expected/misc_sanity.out	2018-07-20 13:44:54.000000000 +0300
--- /Users/smagen/projects/postgresql/env/master/src/src/test/regress/results/misc_sanity.out	2018-07-20 13:47:00.000000000 +0300
***************
*** 105,109 ****
   pg_index                | indpred       | pg_node_tree
   pg_largeobject          | data          | bytea
   pg_largeobject_metadata | lomacl        | aclitem[]
! (11 rows)

--- 105,110 ----
   pg_index                | indpred       | pg_node_tree
   pg_largeobject          | data          | bytea
   pg_largeobject_metadata | lomacl        | aclitem[]
!  pg_ts_config_map        | mapdicts      | jsonb
! (12 rows)

It seems to be related to the recent patches which add toast tables to
the majority of system tables with varlena columns. The regression diff
indicates that the mapdicts field of pg_ts_config_map can't be toasted.
I think we should add a toast table to pg_ts_config_map.
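
For reference, one can quickly check whether the catalog has a toast
table with a query like this (just a sketch against pg_class, not part
of the patch):

SELECT relname, reltoastrelid
FROM pg_catalog.pg_class
WHERE relname = 'pg_ts_config_map';
-- reltoastrelid = 0 means the catalog has no toast table, so large
-- mapdicts values cannot be stored out of line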

Also, I see that you add extra grammar rules for ALTER TEXT SEARCH
CONFIGURATION to the documentation, which allow specifying config
instead of dictionary_name.

+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
+ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>

At the same time, you declare config as follows.

+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>

That is, config itself could be a dictionary_name. I think this makes
the extra grammar rules for ALTER TEXT SEARCH CONFIGURATION redundant.
We can specify those rules to always expect config, assuming that it
can actually be a dictionary name.
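
For example (sketching the intended behavior rather than quoting the
patch; my_config is just a placeholder), a plain dictionary name would
then be the degenerate case of config, so both of these statements go
through the same rule:

-- my_config is a placeholder configuration name
ALTER TEXT SEARCH CONFIGURATION my_config
    ALTER MAPPING FOR asciiword WITH english_stem;

ALTER TEXT SEARCH CONFIGURATION my_config
    ALTER MAPPING FOR asciiword WITH english_stem UNION simple;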

+ if (fout->remoteVersion >= 110000)

PostgreSQL 11 has already passed feature freeze. Thus, we should aim at
PostgreSQL 12 (so this version check should presumably test for 120000).

That's all for now, but I'm going to do a more detailed code review.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#29Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Alexander Korotkov (#28)
1 attachment(s)
Re: Flexible configuration for full-text search

Hi Alexander,

Thank you for the feedback!
A fixed version is attached to this letter.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

Attachments:

0001-flexible-fts-configuration-v14.patchtext/x-patch; charset=us-ascii; name=0001-flexible-fts-configuration-v14.patchDownload
diff --git a/contrib/unaccent/expected/unaccent.out b/contrib/unaccent/expected/unaccent.out
index b93105e9c7..37b9337635 100644
--- a/contrib/unaccent/expected/unaccent.out
+++ b/contrib/unaccent/expected/unaccent.out
@@ -61,3 +61,14 @@ SELECT ts_lexize('unaccent', '
  {����}
 (1 row)
 
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
+         to_tsvector          
+------------------------------
+ 'foobar':1 '�����':2 '���':3
+(1 row)
+
diff --git a/contrib/unaccent/sql/unaccent.sql b/contrib/unaccent/sql/unaccent.sql
index 310213994f..6ce21cdfcd 100644
--- a/contrib/unaccent/sql/unaccent.sql
+++ b/contrib/unaccent/sql/unaccent.sql
@@ -2,7 +2,6 @@ CREATE EXTENSION unaccent;
 
 -- must have a UTF8 database
 SELECT getdatabaseencoding();
-
 SET client_encoding TO 'KOI8';
 
 SELECT unaccent('foobar');
@@ -16,3 +15,12 @@ SELECT unaccent('unaccent', '
 SELECT ts_lexize('unaccent', 'foobar');
 SELECT ts_lexize('unaccent', '����');
 SELECT ts_lexize('unaccent', '����');
+
+CREATE TEXT SEARCH CONFIGURATION unaccent(
+						COPY=russian
+);
+
+ALTER TEXT SEARCH CONFIGURATION unaccent ALTER MAPPING FOR
+	asciiword, word WITH unaccent MAP russian_stem;
+
+SELECT to_tsvector('unaccent', 'foobar ����� ����');
diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml
index ebe0b94b27..4ca37b612e 100644
--- a/doc/src/sgml/ref/alter_tsconfig.sgml
+++ b/doc/src/sgml/ref/alter_tsconfig.sgml
@@ -22,9 +22,9 @@ PostgreSQL documentation
  <refsynopsisdiv>
 <synopsis>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
-    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
+    ADD MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
-    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">dictionary_name</replaceable> [, ... ]
+    ALTER MAPPING FOR <replaceable class="parameter">token_type</replaceable> [, ... ] WITH <replaceable class="parameter">config</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
     ALTER MAPPING REPLACE <replaceable class="parameter">old_dictionary</replaceable> WITH <replaceable class="parameter">new_dictionary</replaceable>
 ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable>
@@ -78,12 +78,12 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
    </varlistentry>
 
    <varlistentry>
-    <term><replaceable class="parameter">dictionary_name</replaceable></term>
+    <term><replaceable class="parameter">config</replaceable></term>
     <listitem>
      <para>
-      The name of a text search dictionary to be consulted for the
-      specified token type(s).  If multiple dictionaries are listed,
-      they are consulted in the specified order.
+      The dictionary tree expression: a condition/command/else triple
+      that defines the way tokens are processed. The
+      <literal>ELSE</literal> part is optional.
      </para>
     </listitem>
    </varlistentry>
@@ -133,7 +133,7 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
      </para>
     </listitem>
    </varlistentry>
- </variablelist>
+  </variablelist>
 
   <para>
    The <literal>ADD MAPPING FOR</literal> form installs a list of dictionaries to be
@@ -154,6 +154,57 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 
  </refsect1>
 
+ <refsect1>
+  <title>Dictionaries Map Configuration</title>
+
+  <refsect2>
+   <title>Format</title>
+   <para>
+    Formally <replaceable class="parameter">config</replaceable> is one of:
+   </para>
+   <programlisting>
+    * dictionary_name [, ... ]
+
+    * config { UNION | INTERSECT | EXCEPT | MAP } config
+
+    * CASE config
+        WHEN [ NO ] MATCH THEN { KEEP | config }
+        [ ELSE config ]
+      END
+   </programlisting>
+  </refsect2>
+
+  <refsect2>
+   <title>Description</title>
+   <para>
+    <replaceable class="parameter">config</replaceable> can be written
+    in three different formats. The simplest format is the name of a dictionary to
+    use for token processing.
+   </para>
+   <para>
+    <replaceable class="parameter">dictionary_name</replaceable> is the name of
+    a text search dictionary to be consulted.
+   </para>
+   <para>
+    In order to use more than one dictionary
+    simultaneously, the user should connect dictionaries with operators. The operators
+    <literal>UNION</literal>, <literal>EXCEPT</literal> and
+    <literal>INTERSECT</literal> have the same meaning as in operations on sets.
+    The special operator <literal>MAP</literal> takes the output of the left subexpression
+    and uses it as the input to the right subexpression.
+   </para>
+   <para>
+    The third format of <replaceable class="parameter">config</replaceable> is similar to
+    a <literal>CASE/WHEN/THEN/ELSE</literal> structure. It consists of three
+    replaceable parts. The first one is the configuration used to construct the lexeme
+    set for the matching condition. If the condition is triggered, the command is executed.
+    Use the command <literal>KEEP</literal> to avoid repeating the same
+    configuration in the condition and command parts; however, the command may differ from
+    the condition. The <literal>ELSE</literal> branch is executed otherwise.
+   </para>
+  </refsect2>
+ </refsect1>
+
  <refsect1>
   <title>Examples</title>
 
@@ -167,6 +218,34 @@ ALTER TEXT SEARCH CONFIGURATION <replaceable>name</replaceable> SET SCHEMA <repl
 ALTER TEXT SEARCH CONFIGURATION my_config
   ALTER MAPPING REPLACE english WITH swedish;
 </programlisting>
+
+  <para>
+   The next example shows how to analyze documents in both the English and German languages.
+   <literal>english_hunspell</literal> and <literal>german_hunspell</literal>
+   return a result only if a word is recognized. Otherwise, the stemmer dictionaries
+   are used to process the token.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH
+   CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+    UNION
+   CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+    In order to combine searches for both exact and processed forms, the vector
+    should contain lexemes produced by <literal>simple</literal> for the exact form
+    of the word as well as lexemes produced by a linguistic-aware dictionary
+    (e.g. <literal>english_stem</literal>) for processed forms.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION my_config
+  ALTER MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
  </refsect1>
 
  <refsect1>
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml
index 6df424c63e..8c8885c85b 100644
--- a/doc/src/sgml/textsearch.sgml
+++ b/doc/src/sgml/textsearch.sgml
@@ -732,10 +732,11 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     The <function>to_tsvector</function> function internally calls a parser
     which breaks the document text into tokens and assigns a type to
     each token.  For each token, a list of
-    dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
-    where the list can vary depending on the token type.  The first dictionary
-    that <firstterm>recognizes</firstterm> the token emits one or more normalized
-    <firstterm>lexemes</firstterm> to represent the token.  For example,
+    condition/command pairs is consulted, where the list can vary depending
+    on the token type; condition and command are expressions on dictionaries
+    (<xref linkend="textsearch-dictionaries"/>), with a matching clause in the condition.
+    The first command whose condition evaluates to true emits one or more normalized
+    <firstterm>lexemes</firstterm> to represent the token. For example,
     <literal>rats</literal> became <literal>rat</literal> because one of the
     dictionaries recognized that the word <literal>rats</literal> is a plural
     form of <literal>rat</literal>.  Some words are recognized as
@@ -743,7 +744,7 @@ SELECT to_tsvector('english', 'a fat  cat sat on a mat - it ate a fat rats');
     causes them to be ignored since they occur too frequently to be useful in
     searching.  In our example these are
     <literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
-    If no dictionary in the list recognizes the token then it is also ignored.
+    If none of the conditions is <literal>true</literal>, the token is also ignored.
     In this example that happened to the punctuation sign <literal>-</literal>
     because there are in fact no dictionaries assigned for its token type
     (<literal>Space symbols</literal>), meaning space tokens will never be
@@ -2312,8 +2313,8 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
      <para>
       a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
       the original token with a new token to be passed to subsequent
-      dictionaries (a dictionary that does this is called a
-      <firstterm>filtering dictionary</firstterm>)
+      dictionaries in the comma-separated syntax (a dictionary that does this
+      is called a <firstterm>filtering dictionary</firstterm>)
      </para>
     </listitem>
     <listitem>
@@ -2345,38 +2346,126 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
    type that the parser can return, a separate list of dictionaries is
    specified by the configuration.  When a token of that type is found
    by the parser, each dictionary in the list is consulted in turn,
-   until some dictionary recognizes it as a known word.  If it is identified
-   as a stop word, or if no dictionary recognizes the token, it will be
-   discarded and not indexed or searched for.
-   Normally, the first dictionary that returns a non-<literal>NULL</literal>
-   output determines the result, and any remaining dictionaries are not
-   consulted; but a filtering dictionary can replace the given word
-   with a modified word, which is then passed to subsequent dictionaries.
+   until a command is selected based on its condition. If no case is
+   selected, the token will be discarded and not indexed or searched for.
   </para>
 
   <para>
-   The general rule for configuring a list of dictionaries
-   is to place first the most narrow, most specific dictionary, then the more
-   general dictionaries, finishing with a very general dictionary, like
+   A tree of cases is described as condition/command/else triples. Each
+   condition is evaluated in order to select the appropriate command to generate
+   the resulting set of lexemes.
+  </para>
+
+  <para>
+   A condition is an expression with dictionaries as operands and the
+   basic set operators <literal>UNION</literal>, <literal>EXCEPT</literal>, <literal>INTERSECT</literal>
+   and the special operator <literal>MAP</literal>.
+   The special operator <literal>MAP</literal> uses the output of the left subexpression as
+   the input for the right subexpression.
+  </para>
+
+  <para>
+    The rules for writing a command are the same as for a condition, with the additional
+    keyword <literal>KEEP</literal>, which reuses the result of the condition as the output.
+  </para>
+
+  <para>
+   A comma-separated list of dictionaries is a simplified variant of a text
+   search configuration. Each dictionary is consulted in turn to process a token, and the
+   first non-<literal>NULL</literal> output is accepted as the processing result.
+  </para>
+
+  <para>
+   The general rule for configuring token processing
+   is to place first the case with the narrowest, most specific dictionary, then the more
+   general dictionaries, finishing with a very general dictionary, like
    a <application>Snowball</application> stemmer or <literal>simple</literal>, which
-   recognizes everything.  For example, for an astronomy-specific search
+   recognizes everything. For example, for an astronomy-specific search
    (<literal>astro_en</literal> configuration) one could bind token type
    <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
    terms, a general English dictionary and a <application>Snowball</application> English
-   stemmer:
+   stemmer, using the comma-separated variant of the mapping:
+  </para>
 
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION astro_en
     ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
 </programlisting>
+
+  <para>
+   Another example is a configuration for both the English and German languages, using the
+   operator-separated variant of the mapping:
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION multi_en_de
+    ADD MAPPING FOR asciiword, word WITH
+        CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+         UNION
+        CASE german_hunspell WHEN MATCH THEN KEEP ELSE german_stem END;
+</programlisting>
+
+  <para>
+   This configuration provides the ability to search a collection of multilingual
+   documents without specifying the language:
+  </para>
+
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'lack');
+ id |                   txt
+----+-----------------------------------------
+  2 | with old stars and lacking gas and dust
+
+WITH docs(id, txt) as (values (1, 'Das geschah zu Beginn dieses Monats'),
+                              (2, 'with old stars and lacking gas and dust'),
+                              (3, '25 light-years across, blown by winds from its central'))
+SELECT * FROM docs WHERE to_tsvector('multi_en_de', txt) @@ to_tsquery('multi_en_de', 'beginnen');
+ id |                 txt
+----+-------------------------------------
+  1 | Das geschah zu Beginn dieses Monats
+</programlisting>
+
+  <para>
+   A combination of a stemmer dictionary with the <literal>simple</literal> one may be used to
+   mix search for the exact form of one word with linguistic search for others.
+  </para>
+
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION exact_and_linguistic
+    ADD MAPPING FOR asciiword, word WITH english_stem UNION simple;
+</programlisting>
+
+  <para>
+   In the following example the <literal>simple</literal> dictionary is used to prevent words in a query from being normalized.
   </para>
 
+<programlisting>
+WITH docs(id, txt) as (values (1, 'Supernova star'),
+                              (2, 'Supernova stars'))
+SELECT * FROM docs WHERE to_tsvector('exact_and_linguistic', txt) @@ (to_tsquery('simple', 'stars') &amp;&amp; to_tsquery('english', 'supernovae'));
+ id |       txt       
+----+-----------------
+  2 | Supernova stars
+</programlisting>
+
+   <caution>
+    <para>
+     Because a <literal>tsvector</literal> carries no information about the origin of each lexeme,
+     this approach may lead to false-positive matches when a stemmed form coincides with an exact form used in a query.
+    </para>
+   </caution>
+
   <para>
-   A filtering dictionary can be placed anywhere in the list, except at the
-   end where it'd be useless.  Filtering dictionaries are useful to partially
+   Filtering dictionaries are useful to partially
    normalize words to simplify the task of later dictionaries.  For example,
    a filtering dictionary could be used to remove accents from accented
    letters, as is done by the <xref linkend="unaccent"/> module.
+   A filtering dictionary should be placed on the left side of the <literal>MAP</literal>
+   operator. If the filtering dictionary returns <literal>NULL</literal>, it passes the
+   initial token to the right subexpression.
   </para>
 
   <sect2 id="textsearch-stopwords">
@@ -2543,9 +2632,9 @@ SELECT ts_lexize('public.simple_dict','The');
 
 <screen>
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | Paris | {english_stem} | english_stem  | english_stem | {pari}
 
 CREATE TEXT SEARCH DICTIONARY my_synonym (
     TEMPLATE = synonym,
@@ -2557,9 +2646,12 @@ ALTER TEXT SEARCH CONFIGURATION english
     WITH my_synonym, english_stem;
 
 SELECT * FROM ts_debug('english', 'Paris');
-   alias   |   description   | token |       dictionaries        | dictionary | lexemes 
------------+-----------------+-------+---------------------------+------------+---------
- asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
+   alias   |   description   | token |       dictionaries        |                configuration                |  command   | lexemes 
+-----------+-----------------+-------+---------------------------+---------------------------------------------+------------+---------
+ asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | CASE my_synonym WHEN MATCH THEN KEEP       +| my_synonym | {paris}
+           |                 |       |                           | ELSE CASE english_stem WHEN MATCH THEN KEEP+|            | 
+           |                 |       |                           | END                                        +|            | 
+           |                 |       |                           | END                                         |            | 
 </screen>
    </para>
 
@@ -3184,6 +3276,21 @@ CREATE TEXT SEARCH DICTIONARY english_ispell (
     Now we can set up the mappings for words in configuration
     <literal>pg</literal>:
 
+<programlisting>
+ALTER TEXT SEARCH CONFIGURATION pg
+    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
+                      word, hword, hword_part
+    WITH 
+      CASE pg_dict WHEN MATCH THEN KEEP
+      ELSE
+          CASE english_ispell WHEN MATCH THEN KEEP
+          ELSE english_stem
+          END
+      END;
+</programlisting>
+
+    Or use the alternative comma-separated syntax:
+
 <programlisting>
 ALTER TEXT SEARCH CONFIGURATION pg
     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
@@ -3263,7 +3370,8 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
          OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
-         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
+         OUT <replaceable class="parameter">configuration</replaceable> <type>text</type>,
+         OUT <replaceable class="parameter">command</replaceable> <type>text</type>,
          OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
          returns setof record
 </synopsis>
@@ -3307,14 +3415,20 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
      </listitem>
      <listitem>
       <para>
-       <replaceable>dictionary</replaceable> <type>regdictionary</type> &mdash; the dictionary
-       that recognized the token, or <literal>NULL</literal> if none did
+       <replaceable>configuration</replaceable> <type>text</type> &mdash; the
+       configuration defined for this token type
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       <replaceable>command</replaceable> <type>text</type> &mdash; the command that describes
+       the way the output was produced
       </para>
      </listitem>
      <listitem>
       <para>
        <replaceable>lexemes</replaceable> <type>text[]</type> &mdash; the lexeme(s) produced
-       by the dictionary that recognized the token, or <literal>NULL</literal> if
+       by the command selected according to the conditions, or <literal>NULL</literal> if
        none did; an empty array (<literal>{}</literal>) means it was recognized as a
        stop word
       </para>
@@ -3327,32 +3441,32 @@ ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>re
 
 <screen>
 SELECT * FROM ts_debug('english','a fat  cat sat on a mat - it ate a fat rats');
-   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+-------+----------------+--------------+---------
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
- blank     | Space symbols   |       | {}             |              | 
- blank     | Space symbols   | -     | {}             |              | 
- asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
- blank     | Space symbols   |       | {}             |              | 
- asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
+   alias   |   description   | token |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+-------+----------------+---------------+--------------+---------
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | cat   | {english_stem} | english_stem  | english_stem | {cat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | sat   | {english_stem} | english_stem  | english_stem | {sat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | on    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | mat   | {english_stem} | english_stem  | english_stem | {mat}
+ blank     | Space symbols   |       |                |               |              | 
+ blank     | Space symbols   | -     |                |               |              | 
+ asciiword | Word, all ASCII | it    | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | ate   | {english_stem} | english_stem  | english_stem | {ate}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | a     | {english_stem} | english_stem  | english_stem | {}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | fat   | {english_stem} | english_stem  | english_stem | {fat}
+ blank     | Space symbols   |       |                |               |              | 
+ asciiword | Word, all ASCII | rats  | {english_stem} | english_stem  | english_stem | {rat}
 </screen>
   </para>
 
@@ -3378,13 +3492,22 @@ ALTER TEXT SEARCH CONFIGURATION public.english
 
 <screen>
 SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
-   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes   
------------+-----------------+-------------+-------------------------------+----------------+-------------
- asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
- blank     | Space symbols   |             | {}                            |                | 
- asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
+   alias   |   description   |    token    |         dictionaries          |                configuration                |     command      |   lexemes   
+-----------+-----------------+-------------+-------------------------------+---------------------------------------------+------------------+-------------
+ asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_ispell   | {bright}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
+ blank     | Space symbols   |             |                               |                                             |                  | 
+ asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | CASE english_ispell WHEN MATCH THEN KEEP   +| english_stem     | {supernova}
+           |                 |             |                               | ELSE CASE english_stem WHEN MATCH THEN KEEP+|                  | 
+           |                 |             |                               | END                                        +|                  | 
+           |                 |             |                               | END                                         |                  | 
 </screen>
 
   <para>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7251552419..deb8b25108 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -948,55 +948,14 @@ GRANT SELECT (subdbid, subname, subowner, subenabled, subslotname, subpublicatio
 -- Tsearch debug function.  Defined here because it'd be pretty unwieldy
 -- to put it into pg_proc.h
 
-CREATE FUNCTION ts_debug(IN config regconfig, IN document text,
-    OUT alias text,
-    OUT description text,
-    OUT token text,
-    OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
-    OUT lexemes text[])
-RETURNS SETOF record AS
-$$
-SELECT
-    tt.alias AS alias,
-    tt.description AS description,
-    parse.token AS token,
-    ARRAY ( SELECT m.mapdict::pg_catalog.regdictionary
-            FROM pg_catalog.pg_ts_config_map AS m
-            WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-            ORDER BY m.mapseqno )
-    AS dictionaries,
-    ( SELECT mapdict::pg_catalog.regdictionary
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS dictionary,
-    ( SELECT pg_catalog.ts_lexize(mapdict, parse.token)
-      FROM pg_catalog.pg_ts_config_map AS m
-      WHERE m.mapcfg = $1 AND m.maptokentype = parse.tokid
-      ORDER BY pg_catalog.ts_lexize(mapdict, parse.token) IS NULL, m.mapseqno
-      LIMIT 1
-    ) AS lexemes
-FROM pg_catalog.ts_parse(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 ), $2
-    ) AS parse,
-     pg_catalog.ts_token_type(
-        (SELECT cfgparser FROM pg_catalog.pg_ts_config WHERE oid = $1 )
-    ) AS tt
-WHERE tt.tokid = parse.tokid
-$$
-LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
-
-COMMENT ON FUNCTION ts_debug(regconfig,text) IS
-    'debug function for text search configuration';
 
 CREATE FUNCTION ts_debug(IN document text,
     OUT alias text,
     OUT description text,
     OUT token text,
     OUT dictionaries regdictionary[],
-    OUT dictionary regdictionary,
+    OUT configuration text,
+    OUT command text,
     OUT lexemes text[])
 RETURNS SETOF record AS
 $$
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 3a843512d1..53ee576223 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -39,9 +39,12 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_func.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_public.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/jsonb.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "utils/syscache.h"
@@ -935,11 +938,22 @@ makeConfigurationDependencies(HeapTuple tuple, bool removeOld,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			TSMapElement *mapdicts = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			Oid		   *dictionaryOids = TSMapGetDictionaries(mapdicts);
+			Oid		   *currentOid = dictionaryOids;
 
-			referenced.classId = TSDictionaryRelationId;
-			referenced.objectId = cfgmap->mapdict;
-			referenced.objectSubId = 0;
-			add_exact_object_address(&referenced, addrs);
+			while (*currentOid != InvalidOid)
+			{
+				referenced.classId = TSDictionaryRelationId;
+				referenced.objectId = *currentOid;
+				referenced.objectSubId = 0;
+				add_exact_object_address(&referenced, addrs);
+
+				currentOid++;
+			}
+
+			pfree(dictionaryOids);
+			TSMapElementFree(mapdicts);
 		}
 
 		systable_endscan(scan);
@@ -1091,8 +1105,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			mapvalues[Anum_pg_ts_config_map_mapcfg - 1] = cfgOid;
 			mapvalues[Anum_pg_ts_config_map_maptokentype - 1] = cfgmap->maptokentype;
-			mapvalues[Anum_pg_ts_config_map_mapseqno - 1] = cfgmap->mapseqno;
-			mapvalues[Anum_pg_ts_config_map_mapdict - 1] = cfgmap->mapdict;
+			mapvalues[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(&cfgmap->mapdicts);
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
@@ -1195,7 +1208,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt)
 	relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock);
 
 	/* Add or drop mappings */
-	if (stmt->dicts)
+	if (stmt->dicts || stmt->dict_map)
 		MakeConfigurationMapping(stmt, tup, relMap);
 	else if (stmt->tokentype)
 		DropConfigurationMapping(stmt, tup, relMap);
@@ -1270,6 +1283,59 @@ getTokenTypes(Oid prsId, List *tokennames)
 	return res;
 }
 
+/*
+ * Parse a raw parse node extracted from a dictionary mapping and transform
+ * it into the internal representation of the mapping.
+ */
+static TSMapElement *
+ParseTSMapConfig(DictMapElem *elem)
+{
+	TSMapElement *result = palloc0(sizeof(TSMapElement));
+
+	if (elem->kind == DICT_MAP_CASE)
+	{
+		TSMapCase  *caseObject = palloc0(sizeof(TSMapCase));
+		DictMapCase *caseASTObject = elem->data;
+
+		caseObject->condition = ParseTSMapConfig(caseASTObject->condition);
+		caseObject->command = ParseTSMapConfig(caseASTObject->command);
+
+		if (caseASTObject->elsebranch)
+		{
+			caseObject->elsebranch = ParseTSMapConfig(caseASTObject->elsebranch);
+			caseObject->elsebranch->parent = result;
+		}
+
+		caseObject->match = caseASTObject->match;
+
+		caseObject->condition->parent = result;
+		caseObject->command->parent = result;
+
+		result->type = TSMAP_CASE;
+		result->value.objectCase = caseObject;
+	}
+	else if (elem->kind == DICT_MAP_EXPRESSION)
+	{
+		TSMapExpression *expression = palloc0(sizeof(TSMapExpression));
+		DictMapExprElem *expressionAST = elem->data;
+
+		expression->left = ParseTSMapConfig(expressionAST->left);
+		expression->right = ParseTSMapConfig(expressionAST->right);
+		expression->operator = expressionAST->oper;
+
+		expression->left->parent = result;
+		expression->right->parent = result;
+
+		result->type = TSMAP_EXPRESSION;
+		result->value.objectExpression = expression;
+	}
+	else if (elem->kind == DICT_MAP_KEEP)
+	{
+		result->value.objectExpression = NULL;
+		result->type = TSMAP_KEEP;
+	}
+	else if (elem->kind == DICT_MAP_DICTIONARY)
+	{
+		result->value.objectDictionary = get_ts_dict_oid(elem->data, false);
+		result->type = TSMAP_DICTIONARY;
+	}
+	return result;
+}
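+
+/*
+ * Illustrative note (not required by the code above): for a mapping written
+ * as
+ *     CASE english_hunspell WHEN MATCH THEN KEEP ELSE english_stem END
+ * (hypothetical dictionary names), the parser hands us a DictMapElem of kind
+ * DICT_MAP_CASE; ParseTSMapConfig turns it into a TSMapElement of type
+ * TSMAP_CASE, resolving dictionary names to OIDs via get_ts_dict_oid().
+ */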
+
 /*
  * ALTER TEXT SEARCH CONFIGURATION ADD/ALTER MAPPING
  */
@@ -1286,8 +1352,9 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	Oid			prsId;
 	int		   *tokens,
 				ntoken;
-	Oid		   *dictIds;
-	int			ndict;
+	Oid		   *dictIds = NULL;
+	int			ndict = 0;
+	TSMapElement *config = NULL;
 	ListCell   *c;
 
 	prsId = ((Form_pg_ts_config) GETSTRUCT(tup))->cfgparser;
@@ -1326,15 +1393,18 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 	/*
 	 * Convert list of dictionary names to array of dict OIDs
 	 */
-	ndict = list_length(stmt->dicts);
-	dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
-	i = 0;
-	foreach(c, stmt->dicts)
+	if (stmt->dicts)
 	{
-		List	   *names = (List *) lfirst(c);
+		ndict = list_length(stmt->dicts);
+		dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+		i = 0;
+		foreach(c, stmt->dicts)
+		{
+			List	   *names = (List *) lfirst(c);
 
-		dictIds[i] = get_ts_dict_oid(names, false);
-		i++;
+			dictIds[i] = get_ts_dict_oid(names, false);
+			i++;
+		}
 	}
 
 	if (stmt->replace)
@@ -1356,6 +1426,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		while (HeapTupleIsValid((maptup = systable_getnext(scan))))
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
+			Datum		repl_val[Natts_pg_ts_config_map];
+			bool		repl_null[Natts_pg_ts_config_map];
+			bool		repl_repl[Natts_pg_ts_config_map];
+			HeapTuple	newtup;
 
 			/*
 			 * check if it's one of target token types
@@ -1379,25 +1453,21 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 			/*
 			 * replace dictionary if match
 			 */
-			if (cfgmap->mapdict == dictOld)
-			{
-				Datum		repl_val[Natts_pg_ts_config_map];
-				bool		repl_null[Natts_pg_ts_config_map];
-				bool		repl_repl[Natts_pg_ts_config_map];
-				HeapTuple	newtup;
-
-				memset(repl_val, 0, sizeof(repl_val));
-				memset(repl_null, false, sizeof(repl_null));
-				memset(repl_repl, false, sizeof(repl_repl));
-
-				repl_val[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictNew);
-				repl_repl[Anum_pg_ts_config_map_mapdict - 1] = true;
-
-				newtup = heap_modify_tuple(maptup,
-										   RelationGetDescr(relMap),
-										   repl_val, repl_null, repl_repl);
-				CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
-			}
+			config = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			TSMapReplaceDictionary(config, dictOld, dictNew);
+
+			memset(repl_val, 0, sizeof(repl_val));
+			memset(repl_null, false, sizeof(repl_null));
+			memset(repl_repl, false, sizeof(repl_repl));
+
+			repl_val[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
+			repl_repl[Anum_pg_ts_config_map_mapdicts - 1] = true;
+
+			newtup = heap_modify_tuple(maptup,
+									   RelationGetDescr(relMap),
+									   repl_val, repl_null, repl_repl);
+			CatalogTupleUpdate(relMap, &newtup->t_self, newtup);
+			TSMapElementFree(config);
 		}
 
 		systable_endscan(scan);
@@ -1407,24 +1477,22 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 		/*
 		 * Insertion of new entries
 		 */
+		config = ParseTSMapConfig(stmt->dict_map);
+
 		for (i = 0; i < ntoken; i++)
 		{
-			for (j = 0; j < ndict; j++)
-			{
-				Datum		values[Natts_pg_ts_config_map];
-				bool		nulls[Natts_pg_ts_config_map];
+			Datum		values[Natts_pg_ts_config_map];
+			bool		nulls[Natts_pg_ts_config_map];
 
-				memset(nulls, false, sizeof(nulls));
-				values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
-				values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
-				values[Anum_pg_ts_config_map_mapseqno - 1] = Int32GetDatum(j + 1);
-				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
+			memset(nulls, false, sizeof(nulls));
+			values[Anum_pg_ts_config_map_mapcfg - 1] = ObjectIdGetDatum(cfgId);
+			values[Anum_pg_ts_config_map_maptokentype - 1] = Int32GetDatum(tokens[i]);
+			values[Anum_pg_ts_config_map_mapdicts - 1] = JsonbPGetDatum(TSMapToJsonb(config));
 
-				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				CatalogTupleInsert(relMap, tup);
+			tup = heap_form_tuple(relMap->rd_att, values, nulls);
+			CatalogTupleInsert(relMap, tup);
 
-				heap_freetuple(tup);
-			}
+			heap_freetuple(tup);
 		}
 	}
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 96836ef19c..56bebae788 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4445,6 +4445,42 @@ _copyReassignOwnedStmt(const ReassignOwnedStmt *from)
 	return newnode;
 }
 
+static DictMapElem *
+_copyDictMapElem(const DictMapElem *from)
+{
+	DictMapElem *newnode = makeNode(DictMapElem);
+
+	COPY_SCALAR_FIELD(kind);
+	COPY_NODE_FIELD(data);
+
+	return newnode;
+}
+
+static DictMapExprElem *
+_copyDictMapExprElem(const DictMapExprElem *from)
+{
+	DictMapExprElem *newnode = makeNode(DictMapExprElem);
+
+	COPY_NODE_FIELD(left);
+	COPY_NODE_FIELD(right);
+	COPY_SCALAR_FIELD(oper);
+
+	return newnode;
+}
+
+static DictMapCase *
+_copyDictMapCase(const DictMapCase *from)
+{
+	DictMapCase *newnode = makeNode(DictMapCase);
+
+	COPY_NODE_FIELD(condition);
+	COPY_NODE_FIELD(command);
+	COPY_NODE_FIELD(elsebranch);
+	COPY_SCALAR_FIELD(match);
+
+	return newnode;
+}
+
 static AlterTSDictionaryStmt *
 _copyAlterTSDictionaryStmt(const AlterTSDictionaryStmt *from)
 {
@@ -5458,6 +5494,15 @@ copyObjectImpl(const void *from)
 		case T_ReassignOwnedStmt:
 			retval = _copyReassignOwnedStmt(from);
 			break;
+		case T_DictMapExprElem:
+			retval = _copyDictMapExprElem(from);
+			break;
+		case T_DictMapElem:
+			retval = _copyDictMapElem(from);
+			break;
+		case T_DictMapCase:
+			retval = _copyDictMapCase(from);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _copyAlterTSDictionaryStmt(from);
 			break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 6a971d0141..ac642f64ab 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2181,6 +2181,36 @@ _equalReassignOwnedStmt(const ReassignOwnedStmt *a, const ReassignOwnedStmt *b)
 	return true;
 }
 
+static bool
+_equalDictMapElem(const DictMapElem *a, const DictMapElem *b)
+{
+	COMPARE_NODE_FIELD(data);
+	COMPARE_SCALAR_FIELD(kind);
+
+	return true;
+}
+
+static bool
+_equalDictMapExprElem(const DictMapExprElem *a, const DictMapExprElem *b)
+{
+	COMPARE_NODE_FIELD(left);
+	COMPARE_NODE_FIELD(right);
+	COMPARE_SCALAR_FIELD(oper);
+
+	return true;
+}
+
+static bool
+_equalDictMapCase(const DictMapCase *a, const DictMapCase *b)
+{
+	COMPARE_NODE_FIELD(condition);
+	COMPARE_NODE_FIELD(command);
+	COMPARE_NODE_FIELD(elsebranch);
+	COMPARE_SCALAR_FIELD(match);
+
+	return true;
+}
+
 static bool
 _equalAlterTSDictionaryStmt(const AlterTSDictionaryStmt *a, const AlterTSDictionaryStmt *b)
 {
@@ -3531,6 +3561,15 @@ equal(const void *a, const void *b)
 		case T_ReassignOwnedStmt:
 			retval = _equalReassignOwnedStmt(a, b);
 			break;
+		case T_DictMapExprElem:
+			retval = _equalDictMapExprElem(a, b);
+			break;
+		case T_DictMapElem:
+			retval = _equalDictMapElem(a, b);
+			break;
+		case T_DictMapCase:
+			retval = _equalDictMapCase(a, b);
+			break;
 		case T_AlterTSDictionaryStmt:
 			retval = _equalAlterTSDictionaryStmt(a, b);
 			break;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 90dfac2cb1..fd2ef8def7 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -52,6 +52,7 @@
 #include "catalog/namespace.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_trigger.h"
+#include "catalog/pg_ts_config_map.h"
 #include "commands/defrem.h"
 #include "commands/trigger.h"
 #include "nodes/makefuncs.h"
@@ -241,6 +242,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	PartitionSpec		*partspec;
 	PartitionBoundSpec	*partboundspec;
 	RoleSpec			*rolespec;
+	DictMapElem			*dmapelem;
 }
 
 %type <node>	stmt schema_stmt
@@ -309,7 +311,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 				analyze_option_list analyze_option_elem
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
-				opt_nowait opt_if_exists opt_with_data
+				opt_nowait opt_if_exists opt_with_data opt_dictionary_map_no
 %type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
@@ -584,6 +586,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>		partbound_datum PartitionRangeDatum
 %type <list>		hash_partbound partbound_datum_list range_datum_list
 %type <defelt>		hash_partbound_elem
+%type <ival>		dictionary_map_set_expr_operator
+%type <dmapelem>	dictionary_map_dict dictionary_map_command_expr_paren
+					dictionary_config dictionary_map_case
+					dictionary_map_action opt_dictionary_map_case_else
+					dictionary_config_comma
 
 /*
  * Non-keyword token types.  These are hard-wired into the "flex" lexer.
@@ -646,13 +653,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	JOIN
 
-	KEY
+	KEEP KEY
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
 	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED
 
-	MAPPING MATCH MATERIALIZED MAXVALUE METHOD MINUTE_P MINVALUE MODE MONTH_P MOVE
+	MAP MAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD
+	MINUTE_P MINVALUE MODE MONTH_P MOVE
 
 	NAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NO NONE
 	NOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF
@@ -10370,24 +10378,26 @@ AlterTSDictionaryStmt:
 		;
 
 AlterTSConfigurationStmt:
-			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with any_name_list
+			ALTER TEXT_P SEARCH CONFIGURATION any_name ADD_P MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ADD_MAPPING;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = false;
 					n->replace = false;
 					$$ = (Node*)n;
 				}
-			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with any_name_list
+			| ALTER TEXT_P SEARCH CONFIGURATION any_name ALTER MAPPING FOR name_list any_with dictionary_config
 				{
 					AlterTSConfigurationStmt *n = makeNode(AlterTSConfigurationStmt);
 					n->kind = ALTER_TSCONFIG_ALTER_MAPPING_FOR_TOKEN;
 					n->cfgname = $5;
 					n->tokentype = $9;
-					n->dicts = $11;
+					n->dict_map = $11;
+					n->dicts = NULL;
 					n->override = true;
 					n->replace = false;
 					$$ = (Node*)n;
@@ -10439,6 +10449,100 @@ any_with:	WITH									{}
 			| WITH_LA								{}
 		;
 
+opt_dictionary_map_no:
+			NO { $$ = true; }
+			| /* EMPTY */ { $$ = false; }
+		;
+
+dictionary_config_comma:
+			dictionary_map_dict { $$ = $1; }
+			| dictionary_map_dict ',' dictionary_config_comma
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = TSMAP_OP_COMMA;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_action:
+			KEEP
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_KEEP;
+				n->data = NULL;
+				$$ = n;
+			}
+			| dictionary_config { $$ = $1; }
+		;
+
+opt_dictionary_map_case_else:
+			ELSE dictionary_config { $$ = $2; }
+			| /* EMPTY */ { $$ = NULL; }
+		;
+
+dictionary_map_case:
+			CASE dictionary_config WHEN opt_dictionary_map_no MATCH THEN dictionary_map_action opt_dictionary_map_case_else END_P
+			{
+				DictMapCase *n = makeNode(DictMapCase);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->condition = $2;
+				n->command = $7;
+				n->elsebranch = $8;
+				n->match = !$4;
+
+				r->kind = DICT_MAP_CASE;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_set_expr_operator:
+			UNION { $$ = TSMAP_OP_UNION; }
+			| EXCEPT { $$ = TSMAP_OP_EXCEPT; }
+			| INTERSECT { $$ = TSMAP_OP_INTERSECT; }
+			| MAP { $$ = TSMAP_OP_MAP; }
+		;
+
+dictionary_config:
+			dictionary_map_command_expr_paren { $$ = $1; }
+			| dictionary_config dictionary_map_set_expr_operator dictionary_map_command_expr_paren
+			{
+				DictMapExprElem *n = makeNode(DictMapExprElem);
+				DictMapElem *r = makeNode(DictMapElem);
+
+				n->left = $1;
+				n->oper = $2;
+				n->right = $3;
+
+				r->kind = DICT_MAP_EXPRESSION;
+				r->data = n;
+				$$ = r;
+			}
+		;
+
+dictionary_map_command_expr_paren:
+			'(' dictionary_config ')'	{ $$ = $2; }
+			| dictionary_map_case			{ $$ = $1; }
+			| dictionary_config_comma		{ $$ = $1; }
+		;
+
+dictionary_map_dict:
+			any_name
+			{
+				DictMapElem *n = makeNode(DictMapElem);
+				n->kind = DICT_MAP_DICTIONARY;
+				n->data = $1;
+				$$ = n;
+			}
+		;
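+
+/*
+ * Illustrative sketch (not part of the grammar): the rules above accept, for
+ * instance,
+ *
+ *   ALTER TEXT SEARCH CONFIGURATION cfg ALTER MAPPING FOR asciiword WITH
+ *     CASE english_hunspell WHEN MATCH THEN KEEP
+ *     ELSE english_stem
+ *     END;
+ *
+ * (hypothetical configuration and dictionary names), yielding a DictMapCase
+ * whose condition is the english_hunspell element, whose command is a
+ * DICT_MAP_KEEP element, and whose elsebranch is the english_stem element.
+ */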
 
 /*****************************************************************************
  *
@@ -15129,6 +15233,7 @@ unreserved_keyword:
 			| LOCK_P
 			| LOCKED
 			| LOGGED
+			| MAP
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -15435,6 +15540,7 @@ reserved_keyword:
 			| INITIALLY
 			| INTERSECT
 			| INTO
+			| KEEP
 			| LATERAL_P
 			| LEADING
 			| LIMIT
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 227468ae9e..e61ad4fa1d 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -26,7 +26,7 @@ DICTFILES_PATH=$(addprefix dicts/,$(DICTFILES))
 OBJS = ts_locale.o ts_parse.o wparser.o wparser_def.o dict.o \
 	dict_simple.o dict_synonym.o dict_thesaurus.o \
 	dict_ispell.o regis.o spell.o \
-	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o
+	to_tsany.o ts_selfuncs.o ts_typanalyze.o ts_utils.o ts_configmap.o
 
 include $(top_srcdir)/src/backend/common.mk
 
diff --git a/src/backend/tsearch/ts_configmap.c b/src/backend/tsearch/ts_configmap.c
new file mode 100644
index 0000000000..714f2a8ab2
--- /dev/null
+++ b/src/backend/tsearch/ts_configmap.c
@@ -0,0 +1,1114 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.c
+ *		internal representation of text search configuration and utilities for it
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/tsearch/ts_configmap.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include <ctype.h>
+
+#include "access/heapam.h"
+#include "access/genam.h"
+#include "access/htup_details.h"
+#include "access/sysattr.h"
+#include "catalog/indexing.h"
+#include "catalog/pg_ts_dict.h"
+#include "catalog/pg_namespace.h"
+#include "catalog/namespace.h"
+#include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "utils/fmgroids.h"
+
+/*
+ * Size selected arbitrarily, based on the assumption that a stack of 1024
+ * frames is enough for parsing any configuration
+ */
+#define JSONB_PARSE_STATE_STACK_SIZE 1024
+
+/*
+ * Used during the parsing of TSMapElement from JSONB into internal
+ * data structures.
+ */
+typedef enum TSMapParseState
+{
+	TSMPS_WAIT_ELEMENT,
+	TSMPS_READ_DICT_OID,
+	TSMPS_READ_COMPLEX_OBJ,
+	TSMPS_READ_EXPRESSION,
+	TSMPS_READ_CASE,
+	TSMPS_READ_OPERATOR,
+	TSMPS_READ_COMMAND,
+	TSMPS_READ_CONDITION,
+	TSMPS_READ_ELSEBRANCH,
+	TSMPS_READ_MATCH,
+	TSMPS_READ_KEEP,
+	TSMPS_READ_LEFT,
+	TSMPS_READ_RIGHT
+} TSMapParseState;
+
+/*
+ * Context used during JSONB parsing to construct a TSMap
+ */
+typedef struct TSMapJsonbParseData
+{
+	TSMapParseState states[JSONB_PARSE_STATE_STACK_SIZE];	/* Stack of states of
+															 * JSONB parsing
+															 * automaton */
+	int			statesIndex;	/* Index of current stack frame */
+	TSMapElement *element;		/* Element that is in construction now */
+} TSMapJsonbParseData;
+
+static JsonbValue *TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState);
+static TSMapElement * JsonbToTSMapElement(JsonbContainer *root);
+
+/*
+ * Print name of the namespace into StringInfo variable result
+ */
+static void
+TSMapPrintNamespace(Oid namespaceId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_namespace namespace;
+
+	maprel = heap_open(NamespaceRelationId, AccessShareLock);
+	mapidx = index_open(NamespaceOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(namespaceId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	namespace = (Form_pg_namespace) GETSTRUCT(maptup);
+	appendStringInfoString(result, namespace->nspname.data);
+	appendStringInfoChar(result, '.');
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print name of the dictionary into StringInfo variable result
+ */
+void
+TSMapPrintDictName(Oid dictId, StringInfo result)
+{
+	Relation	maprel;
+	Relation	mapidx;
+	ScanKeyData mapskey;
+	SysScanDesc mapscan;
+	HeapTuple	maptup;
+	Form_pg_ts_dict dict;
+
+	maprel = heap_open(TSDictionaryRelationId, AccessShareLock);
+	mapidx = index_open(TSDictionaryOidIndexId, AccessShareLock);
+
+	ScanKeyInit(&mapskey, ObjectIdAttributeNumber,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(dictId));
+	mapscan = systable_beginscan_ordered(maprel, mapidx,
+										 NULL, 1, &mapskey);
+
+	maptup = systable_getnext_ordered(mapscan, ForwardScanDirection);
+	dict = (Form_pg_ts_dict) GETSTRUCT(maptup);
+	if (!TSDictionaryIsVisible(dictId))
+	{
+		TSMapPrintNamespace(dict->dictnamespace, result);
+	}
+	appendStringInfoString(result, dict->dictname.data);
+
+	systable_endscan_ordered(mapscan);
+	index_close(mapidx, AccessShareLock);
+	heap_close(maprel, AccessShareLock);
+}
+
+/*
+ * Print the expression into StringInfo variable result
+ */
+static void
+TSMapPrintExpression(TSMapExpression *expression, StringInfo result)
+{
+	Assert(expression->left);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->left, result);
+	if (expression->left->type == TSMAP_EXPRESSION &&
+		expression->left->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+
+	switch (expression->operator)
+	{
+		case TSMAP_OP_UNION:
+			appendStringInfoString(result, " UNION ");
+			break;
+		case TSMAP_OP_EXCEPT:
+			appendStringInfoString(result, " EXCEPT ");
+			break;
+		case TSMAP_OP_INTERSECT:
+			appendStringInfoString(result, " INTERSECT ");
+			break;
+		case TSMAP_OP_COMMA:
+			appendStringInfoString(result, ", ");
+			break;
+		case TSMAP_OP_MAP:
+			appendStringInfoString(result, " MAP ");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains invalid expression operator.")));
+			break;
+	}
+
+	Assert(expression->right);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, '(');
+	}
+	TSMapPrintElement(expression->right, result);
+	if (expression->right->type == TSMAP_EXPRESSION &&
+		expression->right->value.objectExpression->operator != expression->operator)
+	{
+		appendStringInfoChar(result, ')');
+	}
+}
+
+/*
+ * Print the case configuration construction into StringInfo variable result
+ */
+static void
+TSMapPrintCase(TSMapCase *caseObject, StringInfo result)
+{
+	appendStringInfoString(result, "CASE ");
+
+	TSMapPrintElement(caseObject->condition, result);
+
+	appendStringInfoString(result, " WHEN ");
+	if (!caseObject->match)
+		appendStringInfoString(result, "NO ");
+	appendStringInfoString(result, "MATCH THEN ");
+
+	TSMapPrintElement(caseObject->command, result);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		appendStringInfoString(result, "\nELSE ");
+		TSMapPrintElement(caseObject->elsebranch, result);
+	}
+	appendStringInfoString(result, "\nEND");
+}
+
+/*
+ * Print the element into StringInfo result.
+ * Dispatches to the appropriate printing function based on the element type.
+ */
+void
+TSMapPrintElement(TSMapElement *element, StringInfo result)
+{
+	switch (element->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapPrintExpression(element->value.objectExpression, result);
+			break;
+		case TSMAP_DICTIONARY:
+			TSMapPrintDictName(element->value.objectDictionary, result);
+			break;
+		case TSMAP_CASE:
+			TSMapPrintCase(element->value.objectCase, result);
+			break;
+		case TSMAP_KEEP:
+			appendStringInfoString(result, "KEEP");
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains elements with invalid type.")));
+			break;
+	}
+}
+
+/*
+ * Print the text search configuration as text.
+ */
+Datum
+dictionary_mapping_to_text(PG_FUNCTION_ARGS)
+{
+	Oid			cfgOid = PG_GETARG_OID(0);
+	int32		tokentype = PG_GETARG_INT32(1);
+	StringInfo	rawResult;
+	text	   *result = NULL;
+	TSConfigCacheEntry *cacheEntry;
+
+	cacheEntry = lookup_ts_config_cache(cfgOid);
+	rawResult = makeStringInfo();
+
+	if (cacheEntry->lenmap > tokentype && cacheEntry->map[tokentype] != NULL)
+	{
+		TSMapElement *element = cacheEntry->map[tokentype];
+
+		TSMapPrintElement(element, rawResult);
+	}
+
+	result = cstring_to_text(rawResult->data);
+	pfree(rawResult);
+	PG_RETURN_TEXT_P(result);
+}
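+
+/*
+ * Usage note: this presumably backs the new "configuration" and "command"
+ * text columns of ts_debug (see the updated SQL definition); given a
+ * configuration OID and a numeric token type it renders the stored mapping
+ * back into the CASE/WHEN syntax.
+ */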
+
+/* ----------------
+ * Functions used to convert TSMap structure into JSONB representation
+ * ----------------
+ */
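+
+/*
+ * Rough sketch of the JSONB shape produced by the functions below (operator
+ * codes are the numeric TSMAP_OP_* values; the OIDs are hypothetical):
+ *
+ *   dictionary:  16400                               -- bare OID as a number
+ *   expression:  {"operator": <TSMAP_OP_* code>,
+ *                 "left": 16400, "right": 16401}
+ *   case:        {"condition": {...}, "command": {...},
+ *                 "elsebranch": {...}, "match": 1}
+ *   KEEP:        "keep"                              -- plain string
+ */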
+
+/*
+ * Convert an integer value into JsonbValue
+ */
+static JsonbValue *
+IntToJsonbValue(int intValue)
+{
+	char		buffer[16];
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	/*
+	 * Buffer is sized for the longest textual form of an int:
+	 * up to 12 bytes including sign and the terminating NUL
+	 */
+	memset(buffer, 0, sizeof(buffer));
+
+	pg_ltoa(intValue, buffer);
+	value->type = jbvNumeric;
+	value->val.numeric = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+															 CStringGetDatum(buffer),
+															 ObjectIdGetDatum(InvalidOid),
+															 Int32GetDatum(-1)
+															 ));
+	return value;
+}
+
+/*
+ * Convert a FTS configuration expression into JsonbValue
+ */
+static JsonbValue *
+TSMapExpressionToJsonbValue(TSMapExpression *expression, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("operator");
+	key.val.string.val = "operator";
+	value = IntToJsonbValue(expression->operator);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("left");
+	key.val.string.val = "left";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->left, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("right");
+	key.val.string.val = "right";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(expression->right, jsonbState);
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS configuration case into JsonbValue
+ */
+static JsonbValue *
+TSMapCaseToJsonbValue(TSMapCase *caseObject, JsonbParseState *jsonbState)
+{
+	JsonbValue	key;
+	JsonbValue *value = NULL;
+
+	pushJsonbValue(&jsonbState, WJB_BEGIN_OBJECT, NULL);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("condition");
+	key.val.string.val = "condition";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->condition, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	key.type = jbvString;
+	key.val.string.len = strlen("command");
+	key.val.string.val = "command";
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	value = TSMapElementToJsonbValue(caseObject->command, jsonbState);
+
+	if (value && IsAJsonbScalar(value))
+		pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	if (caseObject->elsebranch != NULL)
+	{
+		key.type = jbvString;
+		key.val.string.len = strlen("elsebranch");
+		key.val.string.val = "elsebranch";
+
+		pushJsonbValue(&jsonbState, WJB_KEY, &key);
+		value = TSMapElementToJsonbValue(caseObject->elsebranch, jsonbState);
+
+		if (value && IsAJsonbScalar(value))
+			pushJsonbValue(&jsonbState, WJB_VALUE, value);
+	}
+
+	key.type = jbvString;
+	key.val.string.len = strlen("match");
+	key.val.string.val = "match";
+
+	value = IntToJsonbValue(caseObject->match ? 1 : 0);
+
+	pushJsonbValue(&jsonbState, WJB_KEY, &key);
+	pushJsonbValue(&jsonbState, WJB_VALUE, value);
+
+	return pushJsonbValue(&jsonbState, WJB_END_OBJECT, NULL);
+}
+
+/*
+ * Convert a FTS KEEP command into JsonbValue
+ */
+static JsonbValue *
+TSMapKeepToJsonbValue(JsonbParseState *jsonbState)
+{
+	JsonbValue *value = palloc0(sizeof(JsonbValue));
+
+	value->type = jbvString;
+	value->val.string.len = strlen("keep");
+	value->val.string.val = "keep";
+
+	return pushJsonbValue(&jsonbState, WJB_VALUE, value);
+}
+
+/*
+ * Convert a FTS element into JsonbValue. Common point for all types of TSMapElement
+ */
+JsonbValue *
+TSMapElementToJsonbValue(TSMapElement *element, JsonbParseState *jsonbState)
+{
+	JsonbValue *result = NULL;
+
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_EXPRESSION:
+				result = TSMapExpressionToJsonbValue(element->value.objectExpression, jsonbState);
+				break;
+			case TSMAP_DICTIONARY:
+				result = IntToJsonbValue(element->value.objectDictionary);
+				break;
+			case TSMAP_CASE:
+				result = TSMapCaseToJsonbValue(element->value.objectCase, jsonbState);
+				break;
+			case TSMAP_KEEP:
+				result = TSMapKeepToJsonbValue(jsonbState);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Required text search configuration contains elements with invalid type.")));
+				break;
+		}
+	}
+	return result;
+}
+
+/*
+ * Convert a FTS configuration into JSONB
+ */
+Jsonb *
+TSMapToJsonb(TSMapElement *element)
+{
+	JsonbParseState *jsonbState = NULL;
+	JsonbValue *out;
+	Jsonb	   *result;
+
+	out = TSMapElementToJsonbValue(element, jsonbState);
+
+	result = JsonbValueToJsonb(out);
+	return result;
+}
+
+/* ----------------
+ * Functions used to get TSMap structure from JSONB representation
+ * ----------------
+ */
+
+/*
+ * Extract an integer from JsonbValue
+ */
+static int
+JsonbValueToInt(JsonbValue *value)
+{
+	char	   *str;
+
+	str = DatumGetCString(DirectFunctionCall1(numeric_out, NumericGetDatum(value->val.numeric)));
+	return pg_atoi(str, sizeof(int), 0);
+}
+
+/*
+ * Check whether a key is one of the FTS configuration case fields
+ */
+static bool
+IsTSMapCaseKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "match") == 0 || strcmp(key, "condition") == 0 || strcmp(key, "command") == 0 || strcmp(key, "elsebranch") == 0;
+}
+
+/*
+ * Check whether a key is one of the FTS configuration expression fields
+ */
+static bool
+IsTSMapExpressionKey(JsonbValue *value)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value->val.string.len + 1));
+
+	key[value->val.string.len] = '\0';
+	memcpy(key, value->val.string.val, sizeof(char) * value->val.string.len);
+	return strcmp(key, "operator") == 0 || strcmp(key, "left") == 0 || strcmp(key, "right") == 0;
+}
+
+/*
+ * Configure parseData->element according to value (key)
+ */
+static void
+JsonbBeginObjectKey(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *parentElement = parseData->element;
+
+	parseData->element = palloc0(sizeof(TSMapElement));
+	parseData->element->parent = parentElement;
+
+	/* Overwrite object-type state based on key */
+	if (IsTSMapExpressionKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_EXPRESSION;
+		parseData->element->type = TSMAP_EXPRESSION;
+		parseData->element->value.objectExpression = palloc0(sizeof(TSMapExpression));
+	}
+	else if (IsTSMapCaseKey(&value))
+	{
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CASE;
+		parseData->element->type = TSMAP_CASE;
+		parseData->element->value.objectCase = palloc0(sizeof(TSMapCase));
+	}
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration expression
+ */
+static void
+JsonbKeyExpressionProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "operator") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_OPERATOR;
+	else if (strcmp(key, "left") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_LEFT;
+	else if (strcmp(key, "right") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_RIGHT;
+}
+
+/*
+ * Process a JsonbValue inside a FTS configuration case
+ */
+static void
+JsonbKeyCaseProcessing(JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	/*
+	 * A JsonbValue string may not be null-terminated. Copy it into a
+	 * null-terminated buffer so that strcmp behaves correctly.
+	 */
+	char	   *key = palloc0(sizeof(char) * (value.val.string.len + 1));
+
+	memcpy(key, value.val.string.val, sizeof(char) * value.val.string.len);
+	parseData->statesIndex++;
+
+	if (parseData->statesIndex >= JSONB_PARSE_STATE_STACK_SIZE)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("configuration is too complex to be parsed"),
+				 errdetail("Configurations with more than %d nested objects are not supported.",
+						   JSONB_PARSE_STATE_STACK_SIZE)));
+
+	if (strcmp(key, "condition") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_CONDITION;
+	else if (strcmp(key, "command") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_COMMAND;
+	else if (strcmp(key, "elsebranch") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_ELSEBRANCH;
+	else if (strcmp(key, "match") == 0)
+		parseData->states[parseData->statesIndex] = TSMPS_READ_MATCH;
+}
+
+/*
+ * Convert a JsonbValue into OID TSMapElement
+ */
+static TSMapElement *
+JsonbValueToOidElement(JsonbValue *value, TSMapElement *parent)
+{
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	element->type = TSMAP_DICTIONARY;
+	element->value.objectDictionary = JsonbValueToInt(value);
+	return element;
+}
+
+/*
+ * Convert a JsonbValue into string TSMapElement.
+ * Used for special values such as KEEP command
+ */
+static TSMapElement *
+JsonbValueReadString(JsonbValue *value, TSMapElement *parent)
+{
+	char	   *str;
+	TSMapElement *element = palloc0(sizeof(TSMapElement));
+
+	element->parent = parent;
+	str = palloc0(sizeof(char) * (value->val.string.len + 1));
+	memcpy(str, value->val.string.val, sizeof(char) * value->val.string.len);
+
+	if (strcmp(str, "keep") == 0)
+		element->type = TSMAP_KEEP;
+
+	pfree(str);
+
+	return element;
+}
+
+/*
+ * Process a JsonbValue object
+ */
+static void
+JsonbProcessElement(JsonbIteratorToken r, JsonbValue value, TSMapJsonbParseData *parseData)
+{
+	TSMapElement *element = NULL;
+
+	switch (r)
+	{
+		case WJB_KEY:
+
+			/*
+			 * Construct a TSMapElement object. At the first key inside a
+			 * JSONB object, the element type is selected based on that key.
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMPLEX_OBJ)
+				JsonbBeginObjectKey(value, parseData);
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_EXPRESSION)
+				JsonbKeyExpressionProcessing(value, parseData);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CASE)
+				JsonbKeyCaseProcessing(value, parseData);
+
+			break;
+		case WJB_BEGIN_OBJECT:
+
+			/*
+			 * Begin construction of new object
+			 */
+			parseData->statesIndex++;
+			parseData->states[parseData->statesIndex] = TSMPS_READ_COMPLEX_OBJ;
+			break;
+		case WJB_END_OBJECT:
+
+			/*
+			 * Save constructed object based on current state of parser
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->parent->value.objectExpression->left = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->parent->value.objectExpression->right = parseData->element;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->parent->value.objectCase->condition = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->parent->value.objectCase->command = parseData->element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->parent->value.objectCase->elsebranch = parseData->element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_VALUE:
+
+			/*
+			 * Save a value inside constructing object
+			 */
+			if (value.type == jbvBinary)
+				element = JsonbToTSMapElement(value.val.binary.data);
+			else if (value.type == jbvString)
+				element = JsonbValueReadString(&value, parseData->element);
+			else if (value.type == jbvNumeric)
+				element = JsonbValueToOidElement(&value, parseData->element);
+			else
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains object with invalid type.")));
+
+			if (parseData->states[parseData->statesIndex] == TSMPS_READ_CONDITION)
+				parseData->element->value.objectCase->condition = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_COMMAND)
+				parseData->element->value.objectCase->command = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_ELSEBRANCH)
+				parseData->element->value.objectCase->elsebranch = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_MATCH)
+				parseData->element->value.objectCase->match = JsonbValueToInt(&value) == 1 ? true : false;
+
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_OPERATOR)
+				parseData->element->value.objectExpression->operator = JsonbValueToInt(&value);
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_LEFT)
+				parseData->element->value.objectExpression->left = element;
+			else if (parseData->states[parseData->statesIndex] == TSMPS_READ_RIGHT)
+				parseData->element->value.objectExpression->right = element;
+
+			parseData->statesIndex--;
+			Assert(parseData->statesIndex >= 0);
+			if (parseData->element->parent != NULL)
+				parseData->element = parseData->element->parent;
+			break;
+		case WJB_ELEM:
+
+			/*
+			 * Store a simple element such as dictionary OID
+			 */
+			if (parseData->states[parseData->statesIndex] == TSMPS_WAIT_ELEMENT)
+			{
+				if (parseData->element != NULL)
+					parseData->element = JsonbValueToOidElement(&value, parseData->element->parent);
+				else
+					parseData->element = JsonbValueToOidElement(&value, NULL);
+			}
+			break;
+		default:
+			/* Ignore unused JSONB tokens */
+			break;
+	}
+}
+
+/*
+ * Convert a JsonbContainer into TSMapElement
+ */
+static TSMapElement *
+JsonbToTSMapElement(JsonbContainer *root)
+{
+	TSMapJsonbParseData parseData;
+	JsonbIteratorToken r;
+	JsonbIterator *it;
+	JsonbValue	val;
+
+	parseData.statesIndex = 0;
+	parseData.states[parseData.statesIndex] = TSMPS_WAIT_ELEMENT;
+	parseData.element = NULL;
+
+	it = JsonbIteratorInit(root);
+
+	while ((r = JsonbIteratorNext(&it, &val, true)) != WJB_DONE)
+		JsonbProcessElement(r, val, &parseData);
+
+	return parseData.element;
+}
+
+/*
+ * Convert a JSONB into TSMapElement
+ */
+TSMapElement *
+JsonbToTSMap(Jsonb *json)
+{
+	JsonbContainer *root = &json->root;
+
+	return JsonbToTSMapElement(root);
+}
+
+/* ----------------
+ * Text Search Configuration Map Utils
+ * ----------------
+ */
+
+/*
+ * Dynamically extendable list of OIDs
+ */
+typedef struct OidList
+{
+	Oid		   *data;
+	int			size;			/* Size of the data array. Unused elements
+								 * are filled with InvalidOid */
+} OidList;
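+
+/*
+ * Usage sketch: the array is InvalidOid-terminated, so callers (e.g.
+ * makeConfigurationDependencies) can simply walk it:
+ *
+ *     Oid		   *oids = TSMapGetDictionaries(config);
+ *     Oid		   *cur;
+ *
+ *     for (cur = oids; *cur != InvalidOid; cur++)
+ *         ... use *cur ...
+ */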
+
+/*
+ * Initialize a list
+ */
+static OidList *
+OidListInit(void)
+{
+	OidList    *result = palloc0(sizeof(OidList));
+
+	result->size = 1;
+	result->data = palloc0(result->size * sizeof(Oid));
+	result->data[0] = InvalidOid;
+	return result;
+}
+
+/*
+ * Add a new OID to the list. If it is already stored there, it won't be added a second time.
+ */
+static void
+OidListAdd(OidList *list, Oid oid)
+{
+	int			i;
+
+	/* Search for the Oid in the list */
+	for (i = 0; list->data[i] != InvalidOid; i++)
+		if (list->data[i] == oid)
+			return;
+
+	/* If not found, insert it in the end of the list */
+	if (i >= list->size - 1)
+	{
+		int			j;
+
+		list->size = list->size * 2;
+		list->data = repalloc(list->data, sizeof(Oid) * list->size);
+
+		for (j = i; j < list->size; j++)
+			list->data[j] = InvalidOid;
+	}
+	list->data[i] = oid;
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement.
+ * Used for internal recursive calls.
+ */
+static void
+TSMapGetDictionariesInternal(TSMapElement *config, OidList *list)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapGetDictionariesInternal(config->value.objectExpression->left, list);
+			TSMapGetDictionariesInternal(config->value.objectExpression->right, list);
+			break;
+		case TSMAP_CASE:
+			TSMapGetDictionariesInternal(config->value.objectCase->command, list);
+			TSMapGetDictionariesInternal(config->value.objectCase->condition, list);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapGetDictionariesInternal(config->value.objectCase->elsebranch, list);
+			break;
+		case TSMAP_DICTIONARY:
+			OidListAdd(list, config->value.objectDictionary);
+			break;
+	}
+}
+
+/*
+ * Get OIDs of all dictionaries used in TSMapElement
+ */
+Oid *
+TSMapGetDictionaries(TSMapElement *config)
+{
+	Oid		   *result;
+	OidList    *list = OidListInit();
+
+	TSMapGetDictionariesInternal(config, list);
+
+	result = list->data;
+	pfree(list);
+
+	return result;
+}
+
+/*
+ * Replace one dictionary OID with another in all instances inside a configuration
+ */
+void
+TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict)
+{
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			TSMapReplaceDictionary(config->value.objectExpression->left, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectExpression->right, oldDict, newDict);
+			break;
+		case TSMAP_CASE:
+			TSMapReplaceDictionary(config->value.objectCase->command, oldDict, newDict);
+			TSMapReplaceDictionary(config->value.objectCase->condition, oldDict, newDict);
+			if (config->value.objectCase->elsebranch != NULL)
+				TSMapReplaceDictionary(config->value.objectCase->elsebranch, oldDict, newDict);
+			break;
+		case TSMAP_DICTIONARY:
+			if (config->value.objectDictionary == oldDict)
+				config->value.objectDictionary = newDict;
+			break;
+	}
+}
+
+/* ----------------
+ * Text Search Configuration Map Memory Management
+ * ----------------
+ */
+
+/*
+ * Move a FTS configuration expression to another memory context
+ */
+static TSMapElement *
+TSMapExpressionMoveToMemoryContext(TSMapExpression *expression, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapExpression *resultExpression = MemoryContextAlloc(context, sizeof(TSMapExpression));
+
+	memset(resultExpression, 0, sizeof(TSMapExpression));
+	result->value.objectExpression = resultExpression;
+	result->type = TSMAP_EXPRESSION;
+
+	resultExpression->operator = expression->operator;
+
+	resultExpression->left = TSMapMoveToMemoryContext(expression->left, context);
+	resultExpression->left->parent = result;
+
+	resultExpression->right = TSMapMoveToMemoryContext(expression->right, context);
+	resultExpression->right->parent = result;
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration case to another memory context
+ */
+static TSMapElement *
+TSMapCaseMoveToMemoryContext(TSMapCase *caseObject, MemoryContext context)
+{
+	TSMapElement *result = MemoryContextAlloc(context, sizeof(TSMapElement));
+	TSMapCase  *resultCaseObject = MemoryContextAlloc(context, sizeof(TSMapCase));
+
+	memset(resultCaseObject, 0, sizeof(TSMapCase));
+	result->value.objectCase = resultCaseObject;
+	result->type = TSMAP_CASE;
+
+	resultCaseObject->match = caseObject->match;
+
+	resultCaseObject->command = TSMapMoveToMemoryContext(caseObject->command, context);
+	resultCaseObject->command->parent = result;
+
+	resultCaseObject->condition = TSMapMoveToMemoryContext(caseObject->condition, context);
+	resultCaseObject->condition->parent = result;
+
+	if (caseObject->elsebranch != NULL)
+	{
+		resultCaseObject->elsebranch = TSMapMoveToMemoryContext(caseObject->elsebranch, context);
+		resultCaseObject->elsebranch->parent = result;
+	}
+
+	return result;
+}
+
+/*
+ * Move a FTS configuration to another memory context
+ */
+TSMapElement *
+TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context)
+{
+	TSMapElement *result = NULL;
+
+	switch (config->type)
+	{
+		case TSMAP_EXPRESSION:
+			result = TSMapExpressionMoveToMemoryContext(config->value.objectExpression, context);
+			break;
+		case TSMAP_CASE:
+			result = TSMapCaseMoveToMemoryContext(config->value.objectCase, context);
+			break;
+		case TSMAP_DICTIONARY:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_DICTIONARY;
+			result->value.objectDictionary = config->value.objectDictionary;
+			break;
+		case TSMAP_KEEP:
+			result = MemoryContextAlloc(context, sizeof(TSMapElement));
+			result->type = TSMAP_KEEP;
+			result->value.object = NULL;
+			break;
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg("text search configuration is invalid"),
+					 errdetail("Text search configuration contains object with invalid type.")));
+			break;
+	}
+
+	return result;
+}
+
+/*
+ * Free memory occupied by FTS configuration expression
+ */
+static void
+TSMapExpressionFree(TSMapExpression *expression)
+{
+	if (expression->left)
+		TSMapElementFree(expression->left);
+	if (expression->right)
+		TSMapElementFree(expression->right);
+	pfree(expression);
+}
+
+/*
+ * Free memory occupied by FTS configuration case
+ */
+static void
+TSMapCaseFree(TSMapCase *caseObject)
+{
+	TSMapElementFree(caseObject->condition);
+	TSMapElementFree(caseObject->command);
+	TSMapElementFree(caseObject->elsebranch);
+	pfree(caseObject);
+}
+
+/*
+ * Free memory occupied by FTS configuration element
+ */
+void
+TSMapElementFree(TSMapElement *element)
+{
+	if (element != NULL)
+	{
+		switch (element->type)
+		{
+			case TSMAP_CASE:
+				TSMapCaseFree(element->value.objectCase);
+				break;
+			case TSMAP_EXPRESSION:
+				TSMapExpressionFree(element->value.objectExpression);
+				break;
+		}
+		pfree(element);
+	}
+}
+
+/*
+ * Do a deep comparison of two TSMapElements. Doesn't check parents of elements
+ */
+bool
+TSMapElementEquals(TSMapElement *a, TSMapElement *b)
+{
+	bool		result = true;
+
+	if (a->type == b->type)
+	{
+		switch (a->type)
+		{
+			case TSMAP_CASE:
+				if (!TSMapElementEquals(a->value.objectCase->condition, b->value.objectCase->condition))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectCase->command, b->value.objectCase->command))
+					result = false;
+
+				if (a->value.objectCase->elsebranch != NULL && b->value.objectCase->elsebranch != NULL)
+				{
+					if (!TSMapElementEquals(a->value.objectCase->elsebranch, b->value.objectCase->elsebranch))
+						result = false;
+				}
+				else if (a->value.objectCase->elsebranch != NULL || b->value.objectCase->elsebranch != NULL)
+					result = false;
+
+				if (a->value.objectCase->match != b->value.objectCase->match)
+					result = false;
+				break;
+			case TSMAP_EXPRESSION:
+				if (!TSMapElementEquals(a->value.objectExpression->left, b->value.objectExpression->left))
+					result = false;
+				if (!TSMapElementEquals(a->value.objectExpression->right, b->value.objectExpression->right))
+					result = false;
+				if (a->value.objectExpression->operator != b->value.objectExpression->operator)
+					result = false;
+				break;
+			case TSMAP_DICTIONARY:
+				result = a->value.objectDictionary == b->value.objectDictionary;
+				break;
+			case TSMAP_KEEP:
+				result = true;
+				break;
+		}
+	}
+	else
+		result = false;
+
+	return result;
+}
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index 7b69ef5660..f476abb323 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -16,58 +16,157 @@
 
 #include "tsearch/ts_cache.h"
 #include "tsearch/ts_utils.h"
+#include "tsearch/ts_configmap.h"
+#include "utils/builtins.h"
+#include "funcapi.h"
 
 #define IGNORE_LONGLEXEME	1
 
-/*
+/*-------------------
  * Lexize subsystem
+ *-------------------
  */
 
+/*
+ * Representation of token produced by FTS parser. It contains intermediate
+ * lexemes in case of phrase dictionary processing.
+ */
 typedef struct ParsedLex
 {
-	int			type;
-	char	   *lemm;
-	int			lenlemm;
-	struct ParsedLex *next;
+	int			type;			/* Token type */
+	char	   *lemm;			/* Token itself */
+	int			lenlemm;		/* Length of the token string */
+	int			maplen;			/* Length of the map */
+	bool	   *accepted;		/* Is accepted by some dictionary */
+	bool	   *rejected;		/* Is rejected by all dictionaries */
+	bool	   *notFinished;	/* Some dictionary not finished processing and
+								 * waits for more tokens */
+	struct ParsedLex *next;		/* Next token in the list */
+	TSMapElement *relatedRule;	/* Rule which is used to produce lexemes from
+								 * the token */
 } ParsedLex;
 
+/*
+ * List of tokens produced by FTS parser.
+ */
 typedef struct ListParsedLex
 {
 	ParsedLex  *head;
 	ParsedLex  *tail;
 } ListParsedLex;
 
-typedef struct
+/*
+ * Dictionary state shared between processing of different tokens
+ */
+typedef struct DictState
 {
-	TSConfigCacheEntry *cfg;
-	Oid			curDictId;
-	int			posDict;
-	DictSubState dictState;
-	ParsedLex  *curSub;
-	ListParsedLex towork;		/* current list to work */
-	ListParsedLex waste;		/* list of lexemes that already lexized */
+	Oid			relatedDictionary;	/* DictState contains state of dictionary
+									 * with this Oid */
+	DictSubState subState;		/* Internal state of the dictionary used to
+								 * store some state between dictionary calls */
+	ListParsedLex acceptedTokens;	/* Tokens which are processed and
+									 * accepted, used in the last result
+									 * returned by the dictionary */
+	ListParsedLex intermediateTokens;	/* Tokens which are not accepted, but
+										 * were processed by thesaurus-like
+										 * dictionary */
+	bool		storeToAccepted;	/* Should current token be appended to
+									 * accepted or intermediate tokens */
+	bool		processed;		/* Whether the dictionary took control during
+								 * current token processing */
+	TSLexeme   *tmpResult;		/* Last result returned by thesaurus-like
+								 * dictionary, if the dictionary is still
+								 * waiting for more lexemes */
+} DictState;
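+
+/*
+ * Informal example: a thesaurus-like dictionary matching multi-token phrases
+ * may return nothing for "supernovae" while asking for more input; that
+ * token is then kept in intermediateTokens and the last partial result in
+ * tmpResult until the dictionary either accepts the whole phrase (e.g.
+ * "supernovae stars") or gives up.
+ */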
 
-	/*
-	 * fields to store last variant to lexize (basically, thesaurus or similar
-	 * to, which wants	several lexemes
-	 */
+/*
+ * List of dictionary states
+ */
+typedef struct DictStateList
+{
+	int			listLength;
+	DictState  *states;
+} DictStateList;
 
-	ParsedLex  *lastRes;
-	TSLexeme   *tmpRes;
+/*
+ * Buffer entry with lexemes produced from current token
+ */
+typedef struct LexemesBufferEntry
+{
+	TSMapElement *key;	/* Element of the mapping configuration that produced the entry */
+	ParsedLex  *token;	/* Token used for production of the lexemes */
+	TSLexeme   *data;	/* Lexemes produced from current token */
+} LexemesBufferEntry;
+
+/*
+ * Buffer with lexemes produced from current token
+ */
+typedef struct LexemesBuffer
+{
+	int			size;
+	LexemesBufferEntry *data;
+} LexemesBuffer;
+
+/*
+ * Storage for accepted and possibly accepted lexemes
+ */
+typedef struct ResultStorage
+{
+	TSLexeme   *lexemes;		/* Processed lexemes which are not yet
+								 * accepted */
+	TSLexeme   *accepted;		/* Already accepted lexemes */
+} ResultStorage;
+
+/*
+ * FTS processing context
+ */
+typedef struct LexizeData
+{
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	DictStateList dslist;		/* List of all currently stored states of
+								 * dictionaries */
+	ListParsedLex towork;		/* Current list to work */
+	ListParsedLex waste;		/* List of lexemes that already lexized */
+	LexemesBuffer buffer;		/* Buffer of processed lexemes. Used to avoid
+								 * executing the lexize process multiple times
+								 * with the same parameters */
+	ResultStorage delayedResults;	/* Results that should be returned but may
+									 * be rejected in the future */
+	Oid			skipDictionary; /* The dictionary we should skip during
+								 * processing. Used to avoid infinite loops in
+								 * configurations with a phrase dictionary */
+	bool		debugContext;	/* If true, relatedRule attribute is filled */
 } LexizeData;
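+
+/*
+ * Informal sketch of how the buffer pays off: map elements are compared with
+ * TSMapElementEquals, so in a configuration such as
+ *
+ *     CASE english_hunspell WHEN MATCH THEN english_hunspell END
+ *
+ * (hypothetical dictionary name) the dictionary is lexized only once per
+ * token, serving both the condition and the command.
+ */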
 
-static void
-LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+/*
+ * FTS processing debug context. Used during ts_debug calls.
+ */
+typedef struct TSDebugContext
 {
-	ld->cfg = cfg;
-	ld->curDictId = InvalidOid;
-	ld->posDict = 0;
-	ld->towork.head = ld->towork.tail = ld->curSub = NULL;
-	ld->waste.head = ld->waste.tail = NULL;
-	ld->lastRes = NULL;
-	ld->tmpRes = NULL;
-}
+	TSConfigCacheEntry *cfg;	/* Text search configuration mappings for
+								 * current configuration */
+	TSParserCacheEntry *prsobj; /* Parser context of current ts_debug context */
+	LexDescr   *tokenTypes;		/* Token types supported by current parser */
+	void	   *prsdata;		/* Parser data of current ts_debug context */
+	LexizeData	ldata;			/* Lexize data of current ts_debug context */
+	int			tokentype;		/* Last token tokentype */
+	TSLexeme   *savedLexemes;	/* Last token lexemes stored for ts_debug
+								 * output */
+	ParsedLex  *leftTokens;		/* Corresponding ParsedLex tokens */
+} TSDebugContext;
+
+static TSLexeme *TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression);
+static TSLexeme *LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config);
+
+/*-------------------
+ * ListParsedLex API
+ *-------------------
+ */
 
+/*
+ * Add a ParsedLex to the end of the list
+ */
 static void
 LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 {
@@ -81,274 +180,1291 @@ LPLAddTail(ListParsedLex *list, ParsedLex *newpl)
 	newpl->next = NULL;
 }
 
-static ParsedLex *
-LPLRemoveHead(ListParsedLex *list)
-{
-	ParsedLex  *res = list->head;
+/*
+ * Add a copy of ParsedLex to the end of the list
+ */
+static void
+LPLAddTailCopy(ListParsedLex *list, ParsedLex *newpl)
+{
+	ParsedLex  *copy = palloc0(sizeof(ParsedLex));
+
+	copy->lenlemm = newpl->lenlemm;
+	copy->type = newpl->type;
+	copy->lemm = newpl->lemm;
+	copy->relatedRule = newpl->relatedRule;
+	copy->next = NULL;
+
+	if (list->tail)
+	{
+		list->tail->next = copy;
+		list->tail = copy;
+	}
+	else
+		list->head = list->tail = copy;
+}
+
+/*
+ * Remove the head of the list and return a pointer to the detached head
+ */
+static ParsedLex *
+LPLRemoveHead(ListParsedLex *list)
+{
+	ParsedLex  *res = list->head;
+
+	if (list->head)
+		list->head = list->head->next;
+
+	if (list->head == NULL)
+		list->tail = NULL;
+
+	return res;
+}
+
+/*
+ * Remove all ParsedLex from the list
+ */
+static void
+LPLClear(ListParsedLex *list)
+{
+	ParsedLex  *tmp,
+			   *ptr = list->head;
+
+	while (ptr)
+	{
+		tmp = ptr->next;
+		pfree(ptr);
+		ptr = tmp;
+	}
+
+	list->head = list->tail = NULL;
+}
+
+/*-------------------
+ * LexizeData manipulation functions
+ *-------------------
+ */
+
+/*
+ * Initialize empty LexizeData object
+ */
+static void
+LexizeInit(LexizeData *ld, TSConfigCacheEntry *cfg)
+{
+	ld->cfg = cfg;
+	ld->skipDictionary = InvalidOid;
+	ld->towork.head = ld->towork.tail = NULL;
+	ld->waste.head = ld->waste.tail = NULL;
+	ld->dslist.listLength = 0;
+	ld->dslist.states = NULL;
+	ld->buffer.size = 0;
+	ld->buffer.data = NULL;
+	ld->delayedResults.lexemes = NULL;
+	ld->delayedResults.accepted = NULL;
+}
+
+/*
+ * Add a token to the processing queue
+ */
+static void
+LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
+{
+	ParsedLex  *newpl = (ParsedLex *) palloc0(sizeof(ParsedLex));
+
+	newpl->type = type;
+	newpl->lemm = lemm;
+	newpl->lenlemm = lenlemm;
+	newpl->relatedRule = NULL;
+	LPLAddTail(&ld->towork, newpl);
+}
+
+/*
+ * Remove head of the processing queue
+ */
+static void
+RemoveHead(LexizeData *ld)
+{
+	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+}
+
+/*
+ * Set the token corresponding to the current lexeme
+ */
+static void
+setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+{
+	if (correspondLexem)
+		*correspondLexem = ld->waste.head;
+	else
+		LPLClear(&ld->waste);
+
+	ld->waste.head = ld->waste.tail = NULL;
+}
+
+/*-------------------
+ * DictState manipulation functions
+ *-------------------
+ */
+
+/*
+ * Get a state of dictionary based on its OID
+ */
+static DictState *
+DictStateListGet(DictStateList *list, Oid dictId)
+{
+	int			i;
+	DictState  *result = NULL;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			result = &list->states[i];
+
+	return result;
+}
+
+/*
+ * Remove a state of dictionary based on its OID
+ */
+static void
+DictStateListRemove(DictStateList *list, Oid dictId)
+{
+	int			i;
+
+	for (i = 0; i < list->listLength; i++)
+		if (list->states[i].relatedDictionary == dictId)
+			break;
+
+	if (i != list->listLength)
+	{
+		memcpy(list->states + i, list->states + i + 1, sizeof(DictState) * (list->listLength - i - 1));
+		list->listLength--;
+		if (list->listLength == 0)
+			list->states = NULL;
+		else
+			list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	}
+}
+
+/*
+ * Insert a state of dictionary with specified OID
+ */
+static DictState *
+DictStateListAdd(DictStateList *list, DictState *state)
+{
+	DictStateListRemove(list, state->relatedDictionary);
+
+	list->listLength++;
+	if (list->states)
+		list->states = repalloc(list->states, sizeof(DictState) * list->listLength);
+	else
+		list->states = palloc0(sizeof(DictState) * list->listLength);
+
+	memcpy(list->states + list->listLength - 1, state, sizeof(DictState));
+
+	return list->states + list->listLength - 1;
+}
+
+/*
+ * Remove states of all dictionaries
+ */
+static void
+DictStateListClear(DictStateList *list)
+{
+	list->listLength = 0;
+	if (list->states)
+		pfree(list->states);
+	list->states = NULL;
+}
+
+/*-------------------
+ * LexemesBuffer manipulation functions
+ *-------------------
+ */
+
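+/*
+ * Results are keyed by (TSMapElement, token) pair, so a dictionary or
+ * subexpression that appears both in a CASE condition and in its command is
+ * executed at most once per token.
+ */
+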
+/*
+ * Check if there is a saved lexeme generated by specified TSMapElement
+ */
+static bool
+LexemesBufferContains(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			return true;
+
+	return false;
+}
+
+/*
+ * Get a saved lexeme generated by specified TSMapElement
+ */
+static TSLexeme *
+LexemesBufferGet(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+	TSLexeme   *result = NULL;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			result = buffer->data[i].data;
+
+	return result;
+}
+
+/*
+ * Remove a saved lexeme generated by specified TSMapElement
+ */
+static void
+LexemesBufferRemove(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token)
+{
+	int			i;
+
+	for (i = 0; i < buffer->size; i++)
+		if (TSMapElementEquals(buffer->data[i].key, key) && buffer->data[i].token == token)
+			break;
+
+	if (i != buffer->size)
+	{
+		memcpy(buffer->data + i, buffer->data + i + 1, sizeof(LexemesBufferEntry) * (buffer->size - i - 1));
+		buffer->size--;
+		if (buffer->size == 0)
+			buffer->data = NULL;
+		else
+			buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	}
+}
+
+/*
+ * Save a lexeme generated by the specified TSMapElement
+ */
+static void
+LexemesBufferAdd(LexemesBuffer *buffer, TSMapElement *key, ParsedLex *token, TSLexeme *data)
+{
+	LexemesBufferRemove(buffer, key, token);
+
+	buffer->size++;
+	if (buffer->data)
+		buffer->data = repalloc(buffer->data, sizeof(LexemesBufferEntry) * buffer->size);
+	else
+		buffer->data = palloc0(sizeof(LexemesBufferEntry) * buffer->size);
+
+	buffer->data[buffer->size - 1].token = token;
+	buffer->data[buffer->size - 1].key = key;
+	buffer->data[buffer->size - 1].data = data;
+}
+
+/*
+ * Remove all lexemes saved in a buffer
+ */
+static void
+LexemesBufferClear(LexemesBuffer *buffer)
+{
+	int			i;
+	bool	   *skipEntry = palloc0(sizeof(bool) * buffer->size);
+
+	for (i = 0; i < buffer->size; i++)
+	{
+		if (buffer->data[i].data != NULL && !skipEntry[i])
+		{
+			int			j;
+
+			for (j = 0; j < buffer->size; j++)
+				if (buffer->data[i].data == buffer->data[j].data)
+					skipEntry[j] = true;
+
+			pfree(buffer->data[i].data);
+		}
+	}
+
+	buffer->size = 0;
+	if (buffer->data)
+		pfree(buffer->data);
+	buffer->data = NULL;
+}
+
+/*-------------------
+ * TSLexeme util functions
+ *-------------------
+ */
+
+/*
+ * Get the number of lexemes in a TSLexeme array, excluding the
+ * terminating empty lexeme
+ */
+static int
+TSLexemeGetSize(TSLexeme *lex)
+{
+	int			result = 0;
+	TSLexeme   *ptr = lex;
+
+	while (ptr && ptr->lexeme)
+	{
+		result++;
+		ptr++;
+	}
+
+	return result;
+}
+
+/*
+ * Remove repeated lexemes. Also remove copies of whole nvariant groups.
+ */
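+/*
+ * For example, {book/1, books/1, book/1} (lexeme/nvariant) becomes
+ * {book/1, books/1}; and if one nvariant group repeats the exact lexeme set
+ * of an earlier group, the whole later group is dropped as well.
+ */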
+static TSLexeme *
+TSLexemeRemoveDuplications(TSLexeme *lexeme)
+{
+	TSLexeme   *res;
+	int			curLexIndex;
+	int			i;
+	int			lexemeSize = TSLexemeGetSize(lexeme);
+	int			shouldCopyCount = lexemeSize;
+	bool	   *shouldCopy;
+
+	if (lexeme == NULL)
+		return NULL;
+
+	shouldCopy = palloc(sizeof(bool) * lexemeSize);
+	memset(shouldCopy, true, sizeof(bool) * lexemeSize);
+
+	for (curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		for (i = curLexIndex + 1; i < lexemeSize; i++)
+		{
+			if (!shouldCopy[i])
+				continue;
+
+			if (strcmp(lexeme[curLexIndex].lexeme, lexeme[i].lexeme) == 0)
+			{
+				if (lexeme[curLexIndex].nvariant == lexeme[i].nvariant)
+				{
+					shouldCopy[i] = false;
+					shouldCopyCount--;
+					continue;
+				}
+				else
+				{
+					/*
+					 * Check for same set of lexemes in another nvariant
+					 * series
+					 */
+					int			nvariantCountL = 0;
+					int			nvariantCountR = 0;
+					int			nvariantOverlap = 1;
+					int			j;
+
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[curLexIndex].nvariant == lexeme[j].nvariant)
+							nvariantCountL++;
+					for (j = 0; j < lexemeSize; j++)
+						if (lexeme[i].nvariant == lexeme[j].nvariant)
+							nvariantCountR++;
+
+					if (nvariantCountL != nvariantCountR)
+						continue;
+
+					for (j = 1; j < nvariantCountR; j++)
+					{
+						if (strcmp(lexeme[curLexIndex + j].lexeme, lexeme[i + j].lexeme) == 0
+							&& lexeme[curLexIndex + j].nvariant == lexeme[i + j].nvariant)
+							nvariantOverlap++;
+					}
+
+					if (nvariantOverlap != nvariantCountR)
+						continue;
+
+					for (j = 0; j < nvariantCountR; j++)
+						shouldCopy[i + j] = false;
+				}
+			}
+		}
+	}
+
+	res = palloc0(sizeof(TSLexeme) * (shouldCopyCount + 1));
+
+	for (i = 0, curLexIndex = 0; curLexIndex < lexemeSize; curLexIndex++)
+	{
+		if (shouldCopy[curLexIndex])
+		{
+			memcpy(res + i, lexeme + curLexIndex, sizeof(TSLexeme));
+			i++;
+		}
+	}
+
+	pfree(shouldCopy);
+	return res;
+}
+
+/*
+ * Combine two lexeme lists with respect to positions
+ */
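+/*
+ * The merge alternates between the two lists, copying one position group at
+ * a time (a group ends right before the next lexeme flagged TSL_ADDPOS).
+ * For example, left {a, b(ADDPOS)} and right {x} merge into
+ * {a, x, b(ADDPOS)}: "a" and "x" share the first position, "b" starts the
+ * next one.
+ */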
+static TSLexeme *
+TSLexemeMergePositions(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+
+	if (left != NULL || right != NULL)
+	{
+		int			left_i = 0;
+		int			right_i = 0;
+		int			left_max_nvariant = 0;
+		int			i;
+		int			left_size = TSLexemeGetSize(left);
+		int			right_size = TSLexemeGetSize(right);
+
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		for (i = 0; i < right_size; i++)
+			right[i].nvariant += left_max_nvariant;
+		if (right && right[0].flags & TSL_ADDPOS)
+			right[0].flags &= ~TSL_ADDPOS;
+
+		i = 0;
+		while (i < left_size + right_size)
+		{
+			if (left_i < left_size)
+			{
+				do
+				{
+					result[i++] = left[left_i++];
+				} while (left && left[left_i].lexeme && (left[left_i].flags & TSL_ADDPOS) == 0);
+			}
+
+			if (right_i < right_size)
+			{
+				do
+				{
+					result[i++] = right[right_i++];
+				} while (right && right[right_i].lexeme && (right[right_i].flags & TSL_ADDPOS) == 0);
+			}
+		}
+	}
+	return result;
+}
+
+/*
+ * Split lexemes generated by regular dictionaries and multi-input dictionaries
+ * and combine them with respect to positions
+ */
+static TSLexeme *
+TSLexemeFilterMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *result;
+	TSLexeme   *ptr = lexemes;
+	int			multi_lexemes = 0;
+
+	while (ptr && ptr->lexeme)
+	{
+		if (ptr->flags & TSL_MULTI)
+			multi_lexemes++;
+		ptr++;
+	}
+
+	if (multi_lexemes > 0)
+	{
+		TSLexeme   *lexemes_multi = palloc0(sizeof(TSLexeme) * (multi_lexemes + 1));
+		TSLexeme   *lexemes_rest = palloc0(sizeof(TSLexeme) * (TSLexemeGetSize(lexemes) - multi_lexemes + 1));
+		int			rest_i = 0;
+		int			multi_i = 0;
+
+		ptr = lexemes;
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr->flags & TSL_MULTI)
+				lexemes_multi[multi_i++] = *ptr;
+			else
+				lexemes_rest[rest_i++] = *ptr;
+
+			ptr++;
+		}
+		result = TSLexemeMergePositions(lexemes_rest, lexemes_multi);
+	}
+	else
+	{
+		result = TSLexemeMergePositions(lexemes, NULL);
+	}
+
+	return result;
+}
+
+/*
+ * Mark lexemes as generated by multi-input (thesaurus-like) dictionary
+ */
+static void
+TSLexemeMarkMulti(TSLexeme *lexemes)
+{
+	TSLexeme   *ptr = lexemes;
+
+	while (ptr && ptr->lexeme)
+	{
+		ptr->flags |= TSL_MULTI;
+		ptr++;
+	}
+}
+
+/*-------------------
+ * Lexemes set operations
+ *-------------------
+ */
+
+/*
+ * Combine left and right lexeme lists into one.
+ * If append is true, the first right lexeme is flagged with TSL_ADDPOS, so
+ * the right lexemes start at a new position after the last left lexeme
+ */
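+/*
+ * For example, the union of {a/1, b/2} and {c/1} (lexeme/nvariant) is
+ * {a/1, b/2, c/3}: right-hand nvariants are shifted past the left-hand
+ * maximum so that the variant groups of the two inputs cannot collide.
+ */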
+static TSLexeme *
+TSLexemeUnionOpt(TSLexeme *left, TSLexeme *right, bool append)
+{
+	TSLexeme   *result;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+	int			left_max_nvariant = 0;
+	int			i;
+
+	if (left == NULL && right == NULL)
+	{
+		result = NULL;
+	}
+	else
+	{
+		result = palloc0(sizeof(TSLexeme) * (left_size + right_size + 1));
+
+		for (i = 0; i < left_size; i++)
+			if (left[i].nvariant > left_max_nvariant)
+				left_max_nvariant = left[i].nvariant;
+
+		if (left_size > 0)
+			memcpy(result, left, sizeof(TSLexeme) * left_size);
+		if (right_size > 0)
+			memcpy(result + left_size, right, sizeof(TSLexeme) * right_size);
+		if (append && left_size > 0 && right_size > 0)
+			result[left_size].flags |= TSL_ADDPOS;
+
+		for (i = left_size; i < left_size + right_size; i++)
+			result[i].nvariant += left_max_nvariant;
+	}
+
+	return result;
+}
+
+/*
+ * Combine left and right lexeme lists into one
+ */
+static TSLexeme *
+TSLexemeUnion(TSLexeme *left, TSLexeme *right)
+{
+	return TSLexemeUnionOpt(left, right, false);
+}
+
+/*
+ * Remove common lexemes and return only those stored in the left list
+ */
+static TSLexeme *
+TSLexemeExcept(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (!found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*
+ * Keep only common lexemes
+ */
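+/*
+ * For example, the intersection of {wash, washington} and {washington} is
+ * {washington}.  Lexemes are compared by string only; nvariant and flags are
+ * taken from the left list.
+ */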
+static TSLexeme *
+TSLexemeIntersect(TSLexeme *left, TSLexeme *right)
+{
+	TSLexeme   *result = NULL;
+	int			i,
+				j,
+				k;
+	int			left_size = TSLexemeGetSize(left);
+	int			right_size = TSLexemeGetSize(right);
+
+	result = palloc0(sizeof(TSLexeme) * (left_size + 1));
+
+	for (k = 0, i = 0; i < left_size; i++)
+	{
+		bool		found = false;
+
+		for (j = 0; j < right_size; j++)
+			if (strcmp(left[i].lexeme, right[j].lexeme) == 0)
+				found = true;
+
+		if (found)
+			result[k++] = left[i];
+	}
+
+	return result;
+}
+
+/*-------------------
+ * Result storage functions
+ *-------------------
+ */
+
+/*
+ * Add a lexeme to the result storage
+ */
+static void
+ResultStorageAdd(ResultStorage *storage, ParsedLex *token, TSLexeme *lexs)
+{
+	TSLexeme   *oldLexs = storage->lexemes;
+
+	storage->lexemes = TSLexemeUnionOpt(storage->lexemes, lexs, true);
+	if (oldLexs)
+		pfree(oldLexs);
+}
+
+/*
+ * Move all saved lexemes to accepted list
+ */
+static void
+ResultStorageMoveToAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+	{
+		TSLexeme   *prevAccepted = storage->accepted;
+
+		storage->accepted = TSLexemeUnionOpt(storage->accepted, storage->lexemes, true);
+		if (prevAccepted)
+			pfree(prevAccepted);
+		if (storage->lexemes)
+			pfree(storage->lexemes);
+	}
+	else
+	{
+		storage->accepted = storage->lexemes;
+	}
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all non-accepted lexemes
+ */
+static void
+ResultStorageClearLexemes(ResultStorage *storage)
+{
+	if (storage->lexemes)
+		pfree(storage->lexemes);
+	storage->lexemes = NULL;
+}
+
+/*
+ * Remove all accepted lexemes
+ */
+static void
+ResultStorageClearAccepted(ResultStorage *storage)
+{
+	if (storage->accepted)
+		pfree(storage->accepted);
+	storage->accepted = NULL;
+}
+
+/*-------------------
+ * Condition and command execution
+ *-------------------
+ */
+
+/*
+ * Process a token with the given dictionary
+ */
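+/*
+ * If the dictionary requests more input (subState.getnext), its DictSubState
+ * is parked in the DictStateList, a partial result is saved in tmpResult
+ * with the TSL_MULTI mark, and NULL is returned so that following tokens can
+ * be fed to the same dictionary.  LexizeExec later either accepts the phrase
+ * or rolls the buffered tokens back.
+ */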
+static TSLexeme *
+LexizeExecDictionary(LexizeData *ld, ParsedLex *token, TSMapElement *dictionary)
+{
+	TSLexeme   *res;
+	TSDictionaryCacheEntry *dict;
+	DictSubState subState;
+	Oid			dictId = dictionary->value.objectDictionary;
+
+	if (ld->skipDictionary == dictId)
+		return NULL;
+
+	if (LexemesBufferContains(&ld->buffer, dictionary, token))
+		res = LexemesBufferGet(&ld->buffer, dictionary, token);
+	else
+	{
+		char	   *curValLemm = token->lemm;
+		int			curValLenLemm = token->lenlemm;
+		DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+		dict = lookup_ts_dictionary_cache(dictId);
+
+		if (state)
+		{
+			subState = state->subState;
+			state->processed = true;
+		}
+		else
+		{
+			subState.isend = subState.getnext = false;
+			subState.private_state = NULL;
+		}
+
+		res = (TSLexeme *) DatumGetPointer(FunctionCall4(&(dict->lexize),
+														 PointerGetDatum(dict->dictData),
+														 PointerGetDatum(curValLemm),
+														 Int32GetDatum(curValLenLemm),
+														 PointerGetDatum(&subState)
+														 ));
+
+		if (subState.getnext)
+		{
+			/*
+			 * Dictionary wants next word, so store current context and state
+			 * in the DictStateList
+			 */
+			if (state == NULL)
+			{
+				state = palloc0(sizeof(DictState));
+				state->processed = true;
+				state->relatedDictionary = dictId;
+				state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				state->acceptedTokens.head = state->acceptedTokens.tail = NULL;
+				state->tmpResult = NULL;
+
+				/*
+				 * Add state to the list and update pointer in order to work
+				 * with copy from the list
+				 */
+				state = DictStateListAdd(&ld->dslist, state);
+			}
+
+			state->subState = subState;
+			state->storeToAccepted = res != NULL;
+
+			if (res)
+			{
+				if (state->intermediateTokens.head != NULL)
+				{
+					ParsedLex  *ptr = state->intermediateTokens.head;
+
+					while (ptr)
+					{
+						LPLAddTailCopy(&state->acceptedTokens, ptr);
+						ptr = ptr->next;
+					}
+					state->intermediateTokens.head = state->intermediateTokens.tail = NULL;
+				}
+
+				if (state->tmpResult)
+					pfree(state->tmpResult);
+				TSLexemeMarkMulti(res);
+				state->tmpResult = res;
+				res = NULL;
+			}
+		}
+		else if (state != NULL)
+		{
+			if (res)
+			{
+				if (state)
+					TSLexemeMarkMulti(res);
+				DictStateListRemove(&ld->dslist, dictId);
+			}
+			else
+			{
+				/*
+				 * Trigger post-processing in order to check tmpResult and
+				 * restart processing (see LexizeExec function)
+				 */
+				state->processed = false;
+			}
+		}
+		LexemesBufferAdd(&ld->buffer, dictionary, token, res);
+	}
+
+	return res;
+}
+
+/*
+ * Check whether the dictionary waits for more tokens
+ */
+static bool
+LexizeExecDictionaryWaitNext(LexizeData *ld, Oid dictId)
+{
+	DictState  *state = DictStateListGet(&ld->dslist, dictId);
+
+	if (state)
+		return state->subState.getnext;
+	else
+		return false;
+}
+
+/*
+ * Check whether the dictionary result for the current token is NULL.
+ * If the dictionary waits for more lexemes, the result is interpreted as not null.
+ */
+static bool
+LexizeExecIsNull(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	bool		result = false;
+
+	if (config->type == TSMAP_EXPRESSION)
+	{
+		TSMapExpression *expression = config->value.objectExpression;
+
+		result = LexizeExecIsNull(ld, token, expression->left) || LexizeExecIsNull(ld, token, expression->right);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		Oid			dictOid = config->value.objectDictionary;
+		TSLexeme   *lexemes = LexizeExecDictionary(ld, token, config);
+
+		if (lexemes)
+			result = false;
+		else
+			result = !LexizeExecDictionaryWaitNext(ld, dictOid);
+	}
+	return result;
+}
+
+/*
+ * Execute a MAP operator
+ */
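+/*
+ * Each lexeme produced by the left operand is re-submitted as a separate
+ * token to the right operand, and the per-lexeme outputs are unioned.  For
+ * the comma operator the mapping happens only when the left result carries
+ * TSL_FILTER; otherwise the left result is passed through unchanged (or, if
+ * the left side recognized nothing, the right operand is evaluated
+ * directly), which reproduces the behavior of filtering dictionaries.
+ */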
+static TSLexeme *
+TSLexemeMap(LexizeData *ld, ParsedLex *token, TSMapExpression *expression)
+{
+	TSLexeme   *left_res;
+	TSLexeme   *result = NULL;
+	int			left_size;
+	int			i;
+
+	left_res = LexizeExecTSElement(ld, token, expression->left);
+	left_size = TSLexemeGetSize(left_res);
+
+	if (left_res == NULL && LexizeExecIsNull(ld, token, expression->left))
+		result = LexizeExecTSElement(ld, token, expression->right);
+	else if (expression->operator == TSMAP_OP_COMMA &&
+			(left_res == NULL || (left_res->flags & TSL_FILTER) == 0))
+		result = left_res;
+	else
+	{
+		TSMapElement *relatedRuleTmp = palloc0(sizeof(TSMapElement));
+
+		relatedRuleTmp->parent = NULL;
+		relatedRuleTmp->type = TSMAP_EXPRESSION;
+		relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+		relatedRuleTmp->value.objectExpression->operator = expression->operator;
+		relatedRuleTmp->value.objectExpression->left = token->relatedRule;
+
+		for (i = 0; i < left_size; i++)
+		{
+			TSLexeme   *tmp_res = NULL;
+			TSLexeme   *prev_res;
+			ParsedLex	tmp_token;
+
+			tmp_token.lemm = left_res[i].lexeme;
+			tmp_token.lenlemm = strlen(left_res[i].lexeme);
+			tmp_token.type = token->type;
+			tmp_token.next = NULL;
+
+			tmp_res = LexizeExecTSElement(ld, &tmp_token, expression->right);
+			relatedRuleTmp->value.objectExpression->right = tmp_token.relatedRule;
+			prev_res = result;
+			result = TSLexemeUnion(prev_res, tmp_res);
+			if (prev_res)
+				pfree(prev_res);
+		}
+		token->relatedRule = relatedRuleTmp;
+	}
+
+	return result;
+}
+
+/*
+ * Execute a TSMapElement
+ * Common entry point for all possible types of TSMapElement
+ */
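+/*
+ * Results are memoized in the LexemesBuffer keyed by (element, token).  The
+ * element is then dispatched on its type: a dictionary lookup, a CASE whose
+ * condition is tested with LexizeExecIsNull (KEEP returns the condition's
+ * own output), or an expression that combines sub-results with the set
+ * operators.
+ */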
+static TSLexeme *
+LexizeExecTSElement(LexizeData *ld, ParsedLex *token, TSMapElement *config)
+{
+	TSLexeme   *result = NULL;
+
+	if (LexemesBufferContains(&ld->buffer, config, token))
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexemesBufferGet(&ld->buffer, config, token);
+	}
+	else if (config->type == TSMAP_DICTIONARY)
+	{
+		if (ld->debugContext)
+			token->relatedRule = config;
+		result = LexizeExecDictionary(ld, token, config);
+	}
+	else if (config->type == TSMAP_CASE)
+	{
+		TSMapCase  *caseObject = config->value.objectCase;
+		bool		conditionIsNull = LexizeExecIsNull(ld, token, caseObject->condition);
+
+		if ((!conditionIsNull && caseObject->match) || (conditionIsNull && !caseObject->match))
+		{
+			if (caseObject->command->type == TSMAP_KEEP)
+				result = LexizeExecTSElement(ld, token, caseObject->condition);
+			else
+				result = LexizeExecTSElement(ld, token, caseObject->command);
+		}
+		else if (caseObject->elsebranch)
+			result = LexizeExecTSElement(ld, token, caseObject->elsebranch);
+	}
+	else if (config->type == TSMAP_EXPRESSION)
+	{
+		TSLexeme   *resLeft = NULL;
+		TSLexeme   *resRight = NULL;
+		TSMapElement *relatedRuleTmp = NULL;
+		TSMapExpression *expression = config->value.objectExpression;
+
+		if (expression->operator != TSMAP_OP_MAP && expression->operator != TSMAP_OP_COMMA)
+		{
+			if (ld->debugContext)
+			{
+				relatedRuleTmp = palloc0(sizeof(TSMapElement));
+				relatedRuleTmp->parent = NULL;
+				relatedRuleTmp->type = TSMAP_EXPRESSION;
+				relatedRuleTmp->value.objectExpression = palloc0(sizeof(TSMapExpression));
+				relatedRuleTmp->value.objectExpression->operator = expression->operator;
+			}
 
-	if (list->head)
-		list->head = list->head->next;
+			resLeft = LexizeExecTSElement(ld, token, expression->left);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->left = token->relatedRule;
 
-	if (list->head == NULL)
-		list->tail = NULL;
+			resRight = LexizeExecTSElement(ld, token, expression->right);
+			if (ld->debugContext)
+				relatedRuleTmp->value.objectExpression->right = token->relatedRule;
+		}
 
-	return res;
-}
+		switch (expression->operator)
+		{
+			case TSMAP_OP_UNION:
+				result = TSLexemeUnion(resLeft, resRight);
+				break;
+			case TSMAP_OP_EXCEPT:
+				result = TSLexemeExcept(resLeft, resRight);
+				break;
+			case TSMAP_OP_INTERSECT:
+				result = TSLexemeIntersect(resLeft, resRight);
+				break;
+			case TSMAP_OP_MAP:
+			case TSMAP_OP_COMMA:
+				result = TSLexemeMap(ld, token, expression);
+				break;
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg("text search configuration is invalid"),
+						 errdetail("Text search configuration contains invalid expression operator.")));
+				break;
+		}
 
-static void
-LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
-{
-	ParsedLex  *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+		if (ld->debugContext && relatedRuleTmp != NULL)
+			token->relatedRule = relatedRuleTmp;
+	}
 
-	newpl->type = type;
-	newpl->lemm = lemm;
-	newpl->lenlemm = lenlemm;
-	LPLAddTail(&ld->towork, newpl);
-	ld->curSub = ld->towork.tail;
+	if (!LexemesBufferContains(&ld->buffer, config, token))
+		LexemesBufferAdd(&ld->buffer, config, token, result);
+
+	return result;
 }
 
-static void
-RemoveHead(LexizeData *ld)
+/*-------------------
+ * LexizeExec and helper functions
+ *-------------------
+ */
+
+/*
+ * Process an EOF-like token.
+ * Return all temporary results, if any are saved.
+ */
+static TSLexeme *
+LexizeExecFinishProcessing(LexizeData *ld)
 {
-	LPLAddTail(&ld->waste, LPLRemoveHead(&ld->towork));
+	int			i;
+	TSLexeme   *res = NULL;
+
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		TSLexeme   *last_res = res;
 
-	ld->posDict = 0;
+		res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+		if (last_res)
+			pfree(last_res);
+	}
+
+	return res;
 }
 
-static void
-setCorrLex(LexizeData *ld, ParsedLex **correspondLexem)
+/*
+ * Get the last accepted results of unfinished phrase dictionaries
+ */
+static TSLexeme *
+LexizeExecGetPreviousResults(LexizeData *ld)
 {
-	if (correspondLexem)
-	{
-		*correspondLexem = ld->waste.head;
-	}
-	else
-	{
-		ParsedLex  *tmp,
-				   *ptr = ld->waste.head;
+	int			i;
+	TSLexeme   *res = NULL;
 
-		while (ptr)
+	for (i = 0; i < ld->dslist.listLength; i++)
+	{
+		if (!ld->dslist.states[i].processed)
 		{
-			tmp = ptr->next;
-			pfree(ptr);
-			ptr = tmp;
+			TSLexeme   *last_res = res;
+
+			res = TSLexemeUnion(res, ld->dslist.states[i].tmpResult);
+			if (last_res)
+				pfree(last_res);
 		}
 	}
-	ld->waste.head = ld->waste.tail = NULL;
+
+	return res;
 }
 
+/*
+ * Remove all dictionary states that weren't used for the current token
+ */
 static void
-moveToWaste(LexizeData *ld, ParsedLex *stop)
+LexizeExecClearDictStates(LexizeData *ld)
 {
-	bool		go = true;
+	int			i;
 
-	while (ld->towork.head && go)
+	for (i = 0; i < ld->dslist.listLength; i++)
 	{
-		if (ld->towork.head == stop)
+		if (!ld->dslist.states[i].processed)
 		{
-			ld->curSub = stop->next;
-			go = false;
+			DictStateListRemove(&ld->dslist, ld->dslist.states[i].relatedDictionary);
+			i = 0;
 		}
-		RemoveHead(ld);
 	}
 }
 
-static void
-setNewTmpRes(LexizeData *ld, ParsedLex *lex, TSLexeme *res)
+/*
+ * Check if there are any dictionaries that didn't process the current token
+ */
+static bool
+LexizeExecNotProcessedDictStates(LexizeData *ld)
 {
-	if (ld->tmpRes)
-	{
-		TSLexeme   *ptr;
+	int			i;
 
-		for (ptr = ld->tmpRes; ptr->lexeme; ptr++)
-			pfree(ptr->lexeme);
-		pfree(ld->tmpRes);
-	}
-	ld->tmpRes = res;
-	ld->lastRes = lex;
+	for (i = 0; i < ld->dslist.listLength; i++)
+		if (!ld->dslist.states[i].processed)
+			return true;
+
+	return false;
 }
 
+/*
+ * Perform lexize processing of the towork queue in LexizeData
+ */
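+/*
+ * One iteration takes the head token and runs the map configured for its
+ * token type.  Phrase dictionaries need extra care: tokens consumed by an
+ * unfinished phrase are buffered, a rejected phrase is rolled back by
+ * re-queueing the buffered tokens (with the rejecting dictionary disabled
+ * through skipDictionary), and results are delayed while any phrase is
+ * still open.  Finally, multi-input lexemes are positioned and duplicates
+ * are removed.
+ */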
 static TSLexeme *
 LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 {
+	ParsedLex  *token;
+	TSMapElement *config;
+	TSLexeme   *res = NULL;
+	TSLexeme   *prevIterationResult = NULL;
+	bool		removeHead = false;
+	bool		resetSkipDictionary = false;
+	bool		accepted = false;
 	int			i;
-	ListDictionary *map;
-	TSDictionaryCacheEntry *dict;
-	TSLexeme   *res;
 
-	if (ld->curDictId == InvalidOid)
+	for (i = 0; i < ld->dslist.listLength; i++)
+		ld->dslist.states[i].processed = false;
+	if (ld->skipDictionary != InvalidOid)
+		resetSkipDictionary = true;
+
+	token = ld->towork.head;
+	if (token == NULL)
 	{
-		/*
-		 * usual mode: dictionary wants only one word, but we should keep in
-		 * mind that we should go through all stack
-		 */
+		setCorrLex(ld, correspondLexem);
+		return NULL;
+	}
 
-		while (ld->towork.head)
+	if (token->type >= ld->cfg->lenmap)
+	{
+		removeHead = true;
+	}
+	else
+	{
+		config = ld->cfg->map[token->type];
+		if (config != NULL)
+		{
+			res = LexizeExecTSElement(ld, token, config);
+			prevIterationResult = LexizeExecGetPreviousResults(ld);
+			removeHead = prevIterationResult == NULL;
+		}
+		else
 		{
-			ParsedLex  *curVal = ld->towork.head;
-			char	   *curValLemm = curVal->lemm;
-			int			curValLenLemm = curVal->lenlemm;
+			removeHead = true;
+			if (token->type == 0)	/* Processing EOF-like token */
+			{
+				res = LexizeExecFinishProcessing(ld);
+				prevIterationResult = NULL;
+			}
+		}
 
-			map = ld->cfg->map + curVal->type;
+		if (LexizeExecNotProcessedDictStates(ld) && (token->type == 0 || config != NULL))	/* Rollback processing */
+		{
+			int			i;
+			ListParsedLex *intermediateTokens = NULL;
+			ListParsedLex *acceptedTokens = NULL;
 
-			if (curVal->type == 0 || curVal->type >= ld->cfg->lenmap || map->len == 0)
+			for (i = 0; i < ld->dslist.listLength; i++)
 			{
-				/* skip this type of lexeme */
-				RemoveHead(ld);
-				continue;
+				if (!ld->dslist.states[i].processed)
+				{
+					intermediateTokens = &ld->dslist.states[i].intermediateTokens;
+					acceptedTokens = &ld->dslist.states[i].acceptedTokens;
+					if (prevIterationResult == NULL)
+						ld->skipDictionary = ld->dslist.states[i].relatedDictionary;
+				}
 			}
 
-			for (i = ld->posDict; i < map->len; i++)
+			if (intermediateTokens && intermediateTokens->head)
 			{
-				dict = lookup_ts_dictionary_cache(map->dictIds[i]);
-
-				ld->dictState.isend = ld->dictState.getnext = false;
-				ld->dictState.private_state = NULL;
-				res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-																 &(dict->lexize),
-																 PointerGetDatum(dict->dictData),
-																 PointerGetDatum(curValLemm),
-																 Int32GetDatum(curValLenLemm),
-																 PointerGetDatum(&ld->dictState)
-																 ));
-
-				if (ld->dictState.getnext)
+				ParsedLex  *head = ld->towork.head;
+
+				ld->towork.head = intermediateTokens->head;
+				intermediateTokens->tail->next = head;
+				head->next = NULL;
+				ld->towork.tail = head;
+				removeHead = false;
+				LPLClear(&ld->waste);
+				if (acceptedTokens && acceptedTokens->head)
 				{
-					/*
-					 * dictionary wants next word, so setup and store current
-					 * position and go to multiword mode
-					 */
-
-					ld->curDictId = DatumGetObjectId(map->dictIds[i]);
-					ld->posDict = i + 1;
-					ld->curSub = curVal->next;
-					if (res)
-						setNewTmpRes(ld, curVal, res);
-					return LexizeExec(ld, correspondLexem);
+					ld->waste.head = acceptedTokens->head;
+					ld->waste.tail = acceptedTokens->tail;
 				}
+			}
+			ResultStorageClearLexemes(&ld->delayedResults);
+			if (config != NULL)
+				res = NULL;
+		}
 
-				if (!res)		/* dictionary doesn't know this lexeme */
-					continue;
+		if (config != NULL)
+			LexizeExecClearDictStates(ld);
+		else if (token->type == 0)
+			DictStateListClear(&ld->dslist);
+	}
 
-				if (res->flags & TSL_FILTER)
-				{
-					curValLemm = res->lexeme;
-					curValLenLemm = strlen(res->lexeme);
-					continue;
-				}
+	if (prevIterationResult)
+		res = prevIterationResult;
+	else
+	{
+		int			i;
 
-				RemoveHead(ld);
-				setCorrLex(ld, correspondLexem);
-				return res;
+		for (i = 0; i < ld->dslist.listLength; i++)
+		{
+			if (ld->dslist.states[i].storeToAccepted)
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].acceptedTokens, token);
+				accepted = true;
+				ld->dslist.states[i].storeToAccepted = false;
+			}
+			else
+			{
+				LPLAddTailCopy(&ld->dslist.states[i].intermediateTokens, token);
 			}
-
-			RemoveHead(ld);
 		}
 	}
-	else
-	{							/* curDictId is valid */
-		dict = lookup_ts_dictionary_cache(ld->curDictId);
 
+	if (removeHead)
+		RemoveHead(ld);
+
+	if (ld->dslist.listLength > 0)
+	{
 		/*
-		 * Dictionary ld->curDictId asks  us about following words
+		 * There is at least one thesaurus dictionary in the middle of
+		 * processing. Delay return of the result to avoid wrong lexemes in
+		 * case of thesaurus phrase rejection.
 		 */
+		ResultStorageAdd(&ld->delayedResults, token, res);
+		if (accepted)
+			ResultStorageMoveToAccepted(&ld->delayedResults);
 
-		while (ld->curSub)
+		/*
+		 * Current value of res should not be cleared, because it is stored in
+		 * LexemesBuffer
+		 */
+		res = NULL;
+	}
+	else
+	{
+		if (ld->towork.head == NULL)
 		{
-			ParsedLex  *curVal = ld->curSub;
-
-			map = ld->cfg->map + curVal->type;
-
-			if (curVal->type != 0)
-			{
-				bool		dictExists = false;
-
-				if (curVal->type >= ld->cfg->lenmap || map->len == 0)
-				{
-					/* skip this type of lexeme */
-					ld->curSub = curVal->next;
-					continue;
-				}
+			TSLexeme   *oldAccepted = ld->delayedResults.accepted;
 
-				/*
-				 * We should be sure that current type of lexeme is recognized
-				 * by our dictionary: we just check is it exist in list of
-				 * dictionaries ?
-				 */
-				for (i = 0; i < map->len && !dictExists; i++)
-					if (ld->curDictId == DatumGetObjectId(map->dictIds[i]))
-						dictExists = true;
-
-				if (!dictExists)
-				{
-					/*
-					 * Dictionary can't work with current tpe of lexeme,
-					 * return to basic mode and redo all stored lexemes
-					 */
-					ld->curDictId = InvalidOid;
-					return LexizeExec(ld, correspondLexem);
-				}
-			}
+			ld->delayedResults.accepted = TSLexemeUnionOpt(ld->delayedResults.accepted, ld->delayedResults.lexemes, true);
+			if (oldAccepted)
+				pfree(oldAccepted);
+		}
 
-			ld->dictState.isend = (curVal->type == 0) ? true : false;
-			ld->dictState.getnext = false;
+		/*
+		 * Add accepted delayed results to the output of the parsing. All
+		 * lexemes returned during thesaurus phrase processing should be
+		 * returned simultaneously, since all phrase tokens are processed as
+		 * one.
+		 */
+		if (ld->delayedResults.accepted != NULL)
+		{
+			/*
+			 * Previous value of res should not be cleared, because it is
+			 * stored in LexemesBuffer
+			 */
+			res = TSLexemeUnionOpt(ld->delayedResults.accepted, res, prevIterationResult == NULL);
 
-			res = (TSLexeme *) DatumGetPointer(FunctionCall4(
-															 &(dict->lexize),
-															 PointerGetDatum(dict->dictData),
-															 PointerGetDatum(curVal->lemm),
-															 Int32GetDatum(curVal->lenlemm),
-															 PointerGetDatum(&ld->dictState)
-															 ));
+			ResultStorageClearLexemes(&ld->delayedResults);
+			ResultStorageClearAccepted(&ld->delayedResults);
+		}
+		setCorrLex(ld, correspondLexem);
+	}
 
-			if (ld->dictState.getnext)
-			{
-				/* Dictionary wants one more */
-				ld->curSub = curVal->next;
-				if (res)
-					setNewTmpRes(ld, curVal, res);
-				continue;
-			}
+	if (resetSkipDictionary)
+		ld->skipDictionary = InvalidOid;
 
-			if (res || ld->tmpRes)
-			{
-				/*
-				 * Dictionary normalizes lexemes, so we remove from stack all
-				 * used lexemes, return to basic mode and redo end of stack
-				 * (if it exists)
-				 */
-				if (res)
-				{
-					moveToWaste(ld, ld->curSub);
-				}
-				else
-				{
-					res = ld->tmpRes;
-					moveToWaste(ld, ld->lastRes);
-				}
+	res = TSLexemeFilterMulti(res);
+	if (res)
+		res = TSLexemeRemoveDuplications(res);
 
-				/* reset to initial state */
-				ld->curDictId = InvalidOid;
-				ld->posDict = 0;
-				ld->lastRes = NULL;
-				ld->tmpRes = NULL;
-				setCorrLex(ld, correspondLexem);
-				return res;
-			}
+	/*
+	 * Copy the result since it may be stored in the LexemesBuffer and removed
+	 * at the next step.
+	 */
+	if (res)
+	{
+		TSLexeme   *oldRes = res;
+		int			resSize = TSLexemeGetSize(res);
 
-			/*
-			 * Dict don't want next lexem and didn't recognize anything, redo
-			 * from ld->towork.head
-			 */
-			ld->curDictId = InvalidOid;
-			return LexizeExec(ld, correspondLexem);
-		}
+		res = palloc0(sizeof(TSLexeme) * (resSize + 1));
+		memcpy(res, oldRes, sizeof(TSLexeme) * resSize);
 	}
 
-	setCorrLex(ld, correspondLexem);
-	return NULL;
+	LexemesBufferClear(&ld->buffer);
+	return res;
 }
 
+/*-------------------
+ * ts_parse API functions
+ *-------------------
+ */
+
 /*
  * Parse string and lexize words.
  *
@@ -357,7 +1473,7 @@ LexizeExec(LexizeData *ld, ParsedLex **correspondLexem)
 void
 parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
@@ -375,36 +1491,42 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		while ((norms = LexizeExec(&ldata, NULL)) != NULL)
 		{
-			TSLexeme   *ptr = norms;
+			TSLexeme   *ptr;
+
+			ptr = norms;
 
 			prs->pos++;			/* set pos */
 
@@ -429,14 +1551,246 @@ parsetext(Oid cfgId, ParsedText *prs, char *buf, int buflen)
 			}
 			pfree(norms);
 		}
-	} while (type > 0);
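+	/*
+	 * Tokens re-queued by a phrase rollback may remain in the towork queue
+	 * after the parser reports end of input, so keep looping until it
+	 * drains.
+	 */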
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
 
+/*-------------------
+ * ts_debug and helper functions
+ *-------------------
+ */
+
+/*
+ * Free memory occupied by temporary TSMapElement
+ */
+static void
+ts_debug_free_rule(TSMapElement *element)
+{
+	if (element != NULL && element->type == TSMAP_EXPRESSION)
+	{
+		ts_debug_free_rule(element->value.objectExpression->left);
+		ts_debug_free_rule(element->value.objectExpression->right);
+		pfree(element->value.objectExpression);
+		pfree(element);
+	}
+}
+
+/*
+ * Initialize SRF context and text parser for ts_debug execution.
+ */
+static void
+ts_debug_init(Oid cfgId, text *inputText, FunctionCallInfo fcinfo)
+{
+	TupleDesc	tupdesc;
+	char	   *buf;
+	int			buflen;
+	FuncCallContext *funcctx;
+	MemoryContext oldcontext;
+	TSDebugContext *context;
+
+	funcctx = SRF_FIRSTCALL_INIT();
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+	buf = text_to_cstring(inputText);
+	buflen = strlen(buf);
+
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("function returning record called in context "
+						"that cannot accept type record")));
+
+	funcctx->user_fctx = palloc0(sizeof(TSDebugContext));
+	funcctx->attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	context = funcctx->user_fctx;
+	context->cfg = lookup_ts_config_cache(cfgId);
+	context->prsobj = lookup_ts_parser_cache(context->cfg->prsId);
+
+	context->tokenTypes = (LexDescr *) DatumGetPointer(OidFunctionCall1(context->prsobj->lextypeOid,
+																		(Datum) 0));
+
+	context->prsdata = (void *) DatumGetPointer(FunctionCall2(&context->prsobj->prsstart,
+															  PointerGetDatum(buf),
+															  Int32GetDatum(buflen)));
+	LexizeInit(&context->ldata, context->cfg);
+	context->ldata.debugContext = true;
+	context->tokentype = 1;
+
+	MemoryContextSwitchTo(oldcontext);
+}
+
+/*
+ * Get one token from input text and add it to processing queue.
+ */
+static void
+ts_debug_get_token(FuncCallContext *funcctx)
+{
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+	int			lenlemm;
+	char	   *lemm = NULL;
+
+	context = funcctx->user_fctx;
+
+	oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+	context->tokentype = DatumGetInt32(FunctionCall3(&(context->prsobj->prstoken),
+													 PointerGetDatum(context->prsdata),
+													 PointerGetDatum(&lemm),
+													 PointerGetDatum(&lenlemm)));
+
+	if (context->tokentype > 0 && lenlemm >= MAXSTRLEN)
+	{
+#ifdef IGNORE_LONGLEXEME
+		ereport(NOTICE,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#else
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("word is too long to be indexed"),
+				 errdetail("Words longer than %d characters are ignored.",
+						   MAXSTRLEN)));
+#endif
+	}
+
+	LexizeAddLemm(&context->ldata, context->tokentype, lemm, lenlemm);
+	MemoryContextSwitchTo(oldcontext);
+}
+
 /*
+ * Parse text and print debug information, such as token type, dictionary map
+ * configuration, selected command and lexemes for each token.
+ * Arguments: regconfig cfgId, text *inputText
+ */
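+/*
+ * A minimal usage sketch (the exact output depends on the installed
+ * configuration):
+ *
+ *		SELECT token, dictionaries, command, lexemes
+ *		FROM ts_debug('english'::regconfig, 'The quick rabbits');
+ *
+ * One row is returned per token; "command" shows the rule that actually
+ * produced the lexemes for the token.
+ */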
+Datum
+ts_debug(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	TSDebugContext *context;
+	MemoryContext oldcontext;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		Oid			cfgId = PG_GETARG_OID(0);
+		text	   *inputText = PG_GETARG_TEXT_P(1);
+
+		ts_debug_init(cfgId, inputText, fcinfo);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	context = funcctx->user_fctx;
+
+	while (context->tokentype > 0 && context->leftTokens == NULL)
+	{
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+		ts_debug_get_token(funcctx);
+
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	while (context->leftTokens == NULL && context->ldata.towork.head != NULL)
+		context->savedLexemes = LexizeExec(&context->ldata, &(context->leftTokens));
+
+	if (context->leftTokens && context->leftTokens->type > 0)
+	{
+		HeapTuple	tuple;
+		Datum		result;
+		char	  **values;
+		ParsedLex  *lex = context->leftTokens;
+		StringInfo	str = NULL;
+		TSLexeme   *ptr;
+
+		values = palloc0(sizeof(char *) * 7);
+		str = makeStringInfo();
+
+		values[0] = context->tokenTypes[lex->type - 1].alias;
+		values[1] = context->tokenTypes[lex->type - 1].descr;
+
+		values[2] = palloc0(sizeof(char) * (lex->lenlemm + 1));
+		memcpy(values[2], lex->lemm, sizeof(char) * lex->lenlemm);
+
+		appendStringInfoChar(str, '{');
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			Oid *dictionaries = TSMapGetDictionaries(context->ldata.cfg->map[lex->type]);
+			Oid *currentDictionary = NULL;
+			for (currentDictionary = dictionaries; *currentDictionary != InvalidOid; currentDictionary++)
+			{
+				if (currentDictionary != dictionaries)
+					appendStringInfoChar(str, ',');
+
+				TSMapPrintDictName(*currentDictionary, str);
+			}
+		}
+		appendStringInfoChar(str, '}');
+		values[3] = str->data;
+
+		if (lex->type < context->ldata.cfg->lenmap && context->ldata.cfg->map[lex->type])
+		{
+			initStringInfo(str);
+			TSMapPrintElement(context->ldata.cfg->map[lex->type], str);
+			values[4] = str->data;
+
+			initStringInfo(str);
+			if (lex->relatedRule)
+			{
+				TSMapPrintElement(lex->relatedRule, str);
+				values[5] = str->data;
+				str = makeStringInfo();
+				ts_debug_free_rule(lex->relatedRule);
+				lex->relatedRule = NULL;
+			}
+		}
+
+		initStringInfo(str);
+		ptr = context->savedLexemes;
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '{');
+
+		while (ptr && ptr->lexeme)
+		{
+			if (ptr != context->savedLexemes)
+				appendStringInfoString(str, ", ");
+			appendStringInfoString(str, ptr->lexeme);
+			ptr++;
+		}
+		if (context->savedLexemes)
+			appendStringInfoChar(str, '}');
+		if (context->savedLexemes)
+			values[6] = str->data;
+		else
+			values[6] = NULL;
+
+		tuple = BuildTupleFromCStrings(funcctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		context->leftTokens = lex->next;
+		pfree(lex);
+		if (context->leftTokens == NULL && context->savedLexemes)
+			pfree(context->savedLexemes);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+
+	FunctionCall1(&(context->prsobj->prsend), PointerGetDatum(context->prsdata));
+	SRF_RETURN_DONE(funcctx);
+}
+
+/*-------------------
  * Headline framework
+ *-------------------
  */
+
 static void
 hladdword(HeadlineParsedText *prs, char *buf, int buflen, int type)
 {
@@ -532,12 +1886,12 @@ addHLParsedLex(HeadlineParsedText *prs, TSQuery query, ParsedLex *lexs, TSLexeme
 void
 hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int buflen)
 {
-	int			type,
+	int			type = -1,
 				lenlemm;
 	char	   *lemm = NULL;
 	LexizeData	ldata;
 	TSLexeme   *norms;
-	ParsedLex  *lexs;
+	ParsedLex  *lexs = NULL;
 	TSConfigCacheEntry *cfg;
 	TSParserCacheEntry *prsobj;
 	void	   *prsdata;
@@ -551,32 +1905,36 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 
 	LexizeInit(&ldata, cfg);
 
+	type = 1;
 	do
 	{
-		type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
-										   PointerGetDatum(prsdata),
-										   PointerGetDatum(&lemm),
-										   PointerGetDatum(&lenlemm)));
-
-		if (type > 0 && lenlemm >= MAXSTRLEN)
+		if (type > 0)
 		{
+			type = DatumGetInt32(FunctionCall3(&(prsobj->prstoken),
+											   PointerGetDatum(prsdata),
+											   PointerGetDatum(&lemm),
+											   PointerGetDatum(&lenlemm)));
+
+			if (type > 0 && lenlemm >= MAXSTRLEN)
+			{
 #ifdef IGNORE_LONGLEXEME
-			ereport(NOTICE,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
-			continue;
+				ereport(NOTICE,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
+				continue;
 #else
-			ereport(ERROR,
-					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
-					 errmsg("word is too long to be indexed"),
-					 errdetail("Words longer than %d characters are ignored.",
-							   MAXSTRLEN)));
+				ereport(ERROR,
+						(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+						 errmsg("word is too long to be indexed"),
+						 errdetail("Words longer than %d characters are ignored.",
+								   MAXSTRLEN)));
 #endif
-		}
+			}
 
-		LexizeAddLemm(&ldata, type, lemm, lenlemm);
+			LexizeAddLemm(&ldata, type, lemm, lenlemm);
+		}
 
 		do
 		{
@@ -587,9 +1945,10 @@ hlparsetext(Oid cfgId, HeadlineParsedText *prs, TSQuery query, char *buf, int bu
 			}
 			else
 				addHLParsedLex(prs, query, lexs, NULL);
+			lexs = NULL;
 		} while (norms);
 
-	} while (type > 0);
+	} while (type > 0 || ldata.towork.head);
 
 	FunctionCall1(&(prsobj->prsend), PointerGetDatum(prsdata));
 }
@@ -642,14 +2001,14 @@ generateHeadline(HeadlineParsedText *prs)
 			}
 			else if (!wrd->skip)
 			{
-				if (wrd->selected)
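+				/* Emit startsel only at the first word of a selected run */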
+				if (wrd->selected && (wrd == prs->words || !(wrd - 1)->selected))
 				{
 					memcpy(ptr, prs->startsel, prs->startsellen);
 					ptr += prs->startsellen;
 				}
 				memcpy(ptr, wrd->word, wrd->len);
 				ptr += wrd->len;
-				if (wrd->selected)
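+				/* Emit stopsel only at the last word of a selected run */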
+				if (wrd->selected && ((wrd + 1 - prs->words) == prs->curwords || !(wrd + 1)->selected))
 				{
 					memcpy(ptr, prs->stopsel, prs->stopsellen);
 					ptr += prs->stopsellen;
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index f6e03aea4f..0dd846bece 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -20,7 +20,6 @@
 #include "tsearch/ts_locale.h"
 #include "tsearch/ts_utils.h"
 
-
 /*
  * Given the base name and extension of a tsearch config file, return
  * its full path name.  The base name is assumed to be user-supplied,
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 2b381782a3..f251e83ff6 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -828,11 +828,10 @@ static const struct cachedesc cacheinfo[] = {
 	},
 	{TSConfigMapRelationId,		/* TSCONFIGMAP */
 		TSConfigMapIndexId,
-		3,
+		2,
 		{
 			Anum_pg_ts_config_map_mapcfg,
 			Anum_pg_ts_config_map_maptokentype,
-			Anum_pg_ts_config_map_mapseqno,
 			0
 		},
 		2
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index f11cba4cce..c0f98bad30 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -39,6 +39,7 @@
 #include "catalog/pg_ts_template.h"
 #include "commands/defrem.h"
 #include "tsearch/ts_cache.h"
+#include "tsearch/ts_configmap.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -51,13 +52,12 @@
 
 
 /*
- * MAXTOKENTYPE/MAXDICTSPERTT are arbitrary limits on the workspace size
+ * MAXTOKENTYPE is an arbitrary limit on the workspace size
  * used in lookup_ts_config_cache().  We could avoid hardwiring a limit
  * by making the workspace dynamically enlargeable, but it seems unlikely
  * to be worth the trouble.
  */
-#define MAXTOKENTYPE	256
-#define MAXDICTSPERTT	100
+#define MAXTOKENTYPE		256
 
 
 static HTAB *TSParserCacheHash = NULL;
@@ -418,11 +418,10 @@ lookup_ts_config_cache(Oid cfgId)
 		ScanKeyData mapskey;
 		SysScanDesc mapscan;
 		HeapTuple	maptup;
-		ListDictionary maplists[MAXTOKENTYPE + 1];
-		Oid			mapdicts[MAXDICTSPERTT];
+		TSMapElement *mapconfigs[MAXTOKENTYPE + 1];
 		int			maxtokentype;
-		int			ndicts;
 		int			i;
+		TSMapElement *tmpConfig;
 
 		tp = SearchSysCache1(TSCONFIGOID, ObjectIdGetDatum(cfgId));
 		if (!HeapTupleIsValid(tp))
@@ -453,8 +452,8 @@ lookup_ts_config_cache(Oid cfgId)
 			if (entry->map)
 			{
 				for (i = 0; i < entry->lenmap; i++)
-					if (entry->map[i].dictIds)
-						pfree(entry->map[i].dictIds);
+					if (entry->map[i])
+						TSMapElementFree(entry->map[i]);
 				pfree(entry->map);
 			}
 		}
@@ -468,13 +467,11 @@ lookup_ts_config_cache(Oid cfgId)
 		/*
 		 * Scan pg_ts_config_map to gather dictionary list for each token type
 		 *
-		 * Because the index is on (mapcfg, maptokentype, mapseqno), we will
-		 * see the entries in maptokentype order, and in mapseqno order for
-		 * each token type, even though we didn't explicitly ask for that.
+		 * Because the index is on (mapcfg, maptokentype), we will see the
+		 * entries in maptokentype order even though we didn't explicitly ask
+		 * for that.
 		 */
-		MemSet(maplists, 0, sizeof(maplists));
 		maxtokentype = 0;
-		ndicts = 0;
 
 		ScanKeyInit(&mapskey,
 					Anum_pg_ts_config_map_mapcfg,
@@ -486,6 +483,7 @@ lookup_ts_config_cache(Oid cfgId)
 		mapscan = systable_beginscan_ordered(maprel, mapidx,
 											 NULL, 1, &mapskey);
 
+		memset(mapconfigs, 0, sizeof(mapconfigs));
 		while ((maptup = systable_getnext_ordered(mapscan, ForwardScanDirection)) != NULL)
 		{
 			Form_pg_ts_config_map cfgmap = (Form_pg_ts_config_map) GETSTRUCT(maptup);
@@ -495,51 +493,27 @@ lookup_ts_config_cache(Oid cfgId)
 				elog(ERROR, "maptokentype value %d is out of range", toktype);
 			if (toktype < maxtokentype)
 				elog(ERROR, "maptokentype entries are out of order");
-			if (toktype > maxtokentype)
-			{
-				/* starting a new token type, but first save the prior data */
-				if (ndicts > 0)
-				{
-					maplists[maxtokentype].len = ndicts;
-					maplists[maxtokentype].dictIds = (Oid *)
-						MemoryContextAlloc(CacheMemoryContext,
-										   sizeof(Oid) * ndicts);
-					memcpy(maplists[maxtokentype].dictIds, mapdicts,
-						   sizeof(Oid) * ndicts);
-				}
-				maxtokentype = toktype;
-				mapdicts[0] = cfgmap->mapdict;
-				ndicts = 1;
-			}
-			else
-			{
-				/* continuing data for current token type */
-				if (ndicts >= MAXDICTSPERTT)
-					elog(ERROR, "too many pg_ts_config_map entries for one token type");
-				mapdicts[ndicts++] = cfgmap->mapdict;
-			}
+
+			maxtokentype = toktype;
+			tmpConfig = JsonbToTSMap(DatumGetJsonbP(&cfgmap->mapdicts));
+			mapconfigs[maxtokentype] = TSMapMoveToMemoryContext(tmpConfig, CacheMemoryContext);
+			TSMapElementFree(tmpConfig);
+			tmpConfig = NULL;
 		}
 
 		systable_endscan_ordered(mapscan);
 		index_close(mapidx, AccessShareLock);
 		heap_close(maprel, AccessShareLock);
 
-		if (ndicts > 0)
+		if (maxtokentype > 0)
 		{
-			/* save the last token type's dictionaries */
-			maplists[maxtokentype].len = ndicts;
-			maplists[maxtokentype].dictIds = (Oid *)
-				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(Oid) * ndicts);
-			memcpy(maplists[maxtokentype].dictIds, mapdicts,
-				   sizeof(Oid) * ndicts);
-			/* and save the overall map */
+			/* save the overall map */
 			entry->lenmap = maxtokentype + 1;
-			entry->map = (ListDictionary *)
+			entry->map = (TSMapElement **)
 				MemoryContextAlloc(CacheMemoryContext,
-								   sizeof(ListDictionary) * entry->lenmap);
-			memcpy(entry->map, maplists,
-				   sizeof(ListDictionary) * entry->lenmap);
+								   sizeof(TSMapElement *) * entry->lenmap);
+			memcpy(entry->map, mapconfigs,
+				   sizeof(TSMapElement *) * entry->lenmap);
 		}
 
 		entry->isvalid = true;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 86524d6598..6a1258b096 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -14298,15 +14298,29 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo)
 	PQclear(res);
 
 	resetPQExpBuffer(query);
-	appendPQExpBuffer(query,
-					  "SELECT\n"
-					  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
-					  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
-					  "FROM pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE m.mapcfg = '%u'\n"
-					  "ORDER BY m.mapcfg, m.maptokentype, m.mapseqno",
-					  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+
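+	/*
+	 * On v12 and later the whole dictionary map for a token type is stored
+	 * in a single row and rendered by dictionary_mapping_to_text(); older
+	 * servers expose one row per dictionary, ordered by mapseqno.
+	 */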
+	if (fout->remoteVersion >= 120000)
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
+	else
+		appendPQExpBuffer(query,
+						  "SELECT\n"
+						  "  ( SELECT alias FROM pg_catalog.ts_token_type('%u'::pg_catalog.oid) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS tokenname,\n"
+						  "  m.mapdict::pg_catalog.regdictionary AS dictname\n"
+						  "FROM pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE m.mapcfg = '%u'\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, m.mapseqno\n"
+						  "ORDER BY m.mapcfg, m.maptokentype",
+						  cfginfo->cfgparser, cfginfo->dobj.catId.oid);
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
 	ntups = PQntuples(res);
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index c3bdf8555d..b3345c2b44 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -4684,25 +4684,41 @@ describeOneTSConfig(const char *oid, const char *nspname, const char *cfgname,
 
 	initPQExpBuffer(&buf);
 
-	printfPQExpBuffer(&buf,
-					  "SELECT\n"
-					  "  ( SELECT t.alias FROM\n"
-					  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
-					  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
-					  "  pg_catalog.btrim(\n"
-					  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
-					  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
-					  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
-					  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
-					  "    ) :: pg_catalog.text,\n"
-					  "  '{}') AS \"%s\"\n"
-					  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
-					  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
-					  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
-					  "ORDER BY 1;",
-					  gettext_noop("Token"),
-					  gettext_noop("Dictionaries"),
-					  oid);
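+	/* On v12 and later the map is rendered by dictionary_mapping_to_text() */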
+	if (pset.sversion >= 120000)
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  " dictionary_mapping_to_text(m.mapcfg, m.maptokentype) AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
+	else
+		printfPQExpBuffer(&buf,
+						  "SELECT\n"
+						  "  ( SELECT t.alias FROM\n"
+						  "    pg_catalog.ts_token_type(c.cfgparser) AS t\n"
+						  "    WHERE t.tokid = m.maptokentype ) AS \"%s\",\n"
+						  "  pg_catalog.btrim(\n"
+						  "    ARRAY( SELECT mm.mapdict::pg_catalog.regdictionary\n"
+						  "           FROM pg_catalog.pg_ts_config_map AS mm\n"
+						  "           WHERE mm.mapcfg = m.mapcfg AND mm.maptokentype = m.maptokentype\n"
+						  "           ORDER BY mapcfg, maptokentype, mapseqno\n"
+						  "    ) :: pg_catalog.text,\n"
+						  "  '{}') AS \"%s\"\n"
+						  "FROM pg_catalog.pg_ts_config AS c, pg_catalog.pg_ts_config_map AS m\n"
+						  "WHERE c.oid = '%s' AND m.mapcfg = c.oid\n"
+						  "GROUP BY m.mapcfg, m.maptokentype, c.cfgparser\n"
+						  "ORDER BY 1;",
+						  gettext_noop("Token"),
+						  gettext_noop("Dictionaries"),
+						  oid);
 
 	res = PSQLexec(buf.data);
 	termPQExpBuffer(&buf);
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 24915824ca..2e9e496692 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -261,7 +261,7 @@ DECLARE_UNIQUE_INDEX(pg_ts_config_cfgname_index, 3608, on pg_ts_config using btr
 DECLARE_UNIQUE_INDEX(pg_ts_config_oid_index, 3712, on pg_ts_config using btree(oid oid_ops));
 #define TSConfigOidIndexId	3712
 
-DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops, mapseqno int4_ops));
+DECLARE_UNIQUE_INDEX(pg_ts_config_map_index, 3609, on pg_ts_config_map using btree(mapcfg oid_ops, maptokentype int4_ops));
 #define TSConfigMapIndexId	3609
 
 DECLARE_UNIQUE_INDEX(pg_ts_dict_dictname_index, 3604, on pg_ts_dict using btree(dictname name_ops, dictnamespace oid_ops));
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a14651010f..876f6372b7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -9023,6 +9023,19 @@
   prorettype => 'regconfig', proargtypes => '',
   prosrc => 'get_current_ts_config' },
 
+{ oid => '8891', descr => 'returns text representation of dictionary configuration map',
+  proname => 'dictionary_mapping_to_text', provolatile => 's',
+  prorettype => 'text', proargtypes => 'regconfig int4',
+  prosrc => 'dictionary_mapping_to_text' },
+
+{ oid => '8892', descr => 'debug function for a text search configuration',
+  proname => 'ts_debug', provolatile => 's',
+  prorettype => 'record', proargtypes => 'regconfig text',
+  proallargtypes => '{regconfig,text,text,text,text,_regdictionary,text,text,_text}',
+  proargmodes => '{i,i,o,o,o,o,o,o,o}',
+  proargnames => '{ftsconfig,inputext,alias,description,token,dictionaries,configuration,command,lexemes}',
+  prosrc => 'ts_debug' },
+
 { oid => '3736', descr => 'I/O',
   proname => 'regconfigin', provolatile => 's', prorettype => 'regconfig',
   proargtypes => 'cstring', prosrc => 'regconfigin' },
diff --git a/src/include/catalog/pg_ts_config_map.dat b/src/include/catalog/pg_ts_config_map.dat
index 097a9f5e6d..16982dfa98 100644
--- a/src/include/catalog/pg_ts_config_map.dat
+++ b/src/include/catalog/pg_ts_config_map.dat
@@ -12,24 +12,24 @@
 
 [
 
-{ mapcfg => '3748', maptokentype => '1', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '2', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '3', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '4', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '5', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '6', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '7', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '8', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '9', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '10', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '11', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '15', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '16', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '17', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '18', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '19', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '20', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '21', mapseqno => '1', mapdict => '3765' },
-{ mapcfg => '3748', maptokentype => '22', mapseqno => '1', mapdict => '3765' },
+{ mapcfg => '3748', maptokentype => '1', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '2', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '3', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '4', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '5', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '6', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '7', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '8', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '9', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '10', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '11', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '15', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '16', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '17', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '18', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '19', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '20', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '21', mapdicts => '[3765]' },
+{ mapcfg => '3748', maptokentype => '22', mapdicts => '[3765]' },
 
 ]
diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h
index 5856323373..9298fa86f1 100644
--- a/src/include/catalog/pg_ts_config_map.h
+++ b/src/include/catalog/pg_ts_config_map.h
@@ -20,6 +20,7 @@
 #define PG_TS_CONFIG_MAP_H
 
 #include "catalog/genbki.h"
+#include "utils/jsonb.h"
 #include "catalog/pg_ts_config_map_d.h"
 
 /* ----------------
@@ -27,14 +28,91 @@
  *		typedef struct FormData_pg_ts_config_map
  * ----------------
  */
+#define TSConfigMapRelationId	3603
+
+/*
+ * Create a typedef in order to use the same type name in the
+ * generated DB initialization script and in C source code
+ */
+typedef Jsonb jsonb;
+
 CATALOG(pg_ts_config_map,3603,TSConfigMapRelationId) BKI_WITHOUT_OIDS
 {
 	Oid			mapcfg;			/* OID of configuration owning this entry */
 	int32		maptokentype;	/* token type from parser */
-	int32		mapseqno;		/* order in which to consult dictionaries */
-	Oid			mapdict;		/* dictionary to consult */
+
+	/*
+	 * mapdicts is the only variable-length field, so it is safe to use
+	 * it directly, without hiding it from the C interface.
+	 */
+	jsonb		mapdicts;		/* dictionary map Jsonb representation */
 } FormData_pg_ts_config_map;
 
 typedef FormData_pg_ts_config_map *Form_pg_ts_config_map;
 
+/*
+ * Element of the mapping expression tree
+ */
+typedef struct TSMapElement
+{
+	int			type; /* Type of the element */
+	union
+	{
+		struct TSMapExpression *objectExpression;
+		struct TSMapCase *objectCase;
+		Oid			objectDictionary;
+		void	   *object;
+	} value;
+	struct TSMapElement *parent; /* Parent in the expression tree */
+} TSMapElement;
+
+/*
+ * Representation of expression with operator and two operands
+ */
+typedef struct TSMapExpression
+{
+	int			operator;
+	TSMapElement *left;
+	TSMapElement *right;
+} TSMapExpression;
+
+/*
+ * Representation of CASE structure inside database
+ */
+typedef struct TSMapCase
+{
+	TSMapElement *condition;
+	TSMapElement *command;
+	TSMapElement *elsebranch;
+	bool		match;	/* If false, NO MATCH is used */
+} TSMapCase;
+
+/* ----------------
+ *		Compiler constants for pg_ts_config_map
+ * ----------------
+ */
+#define Natts_pg_ts_config_map				3
+#define Anum_pg_ts_config_map_mapcfg		1
+#define Anum_pg_ts_config_map_maptokentype	2
+#define Anum_pg_ts_config_map_mapdicts		3
+
+/* ----------------
+ *		Dictionary map operators
+ * ----------------
+ */
+#define TSMAP_OP_MAP			1
+#define TSMAP_OP_UNION			2
+#define TSMAP_OP_EXCEPT			3
+#define TSMAP_OP_INTERSECT		4
+#define TSMAP_OP_COMMA			5
+
+/* ----------------
+ *		TSMapElement object types
+ * ----------------
+ */
+#define TSMAP_EXPRESSION	1
+#define TSMAP_CASE			2
+#define TSMAP_DICTIONARY	3
+#define TSMAP_KEEP			4
+
 #endif							/* PG_TS_CONFIG_MAP_H */
diff --git a/src/include/catalog/toasting.h b/src/include/catalog/toasting.h
index f259890e43..bdee176cf2 100644
--- a/src/include/catalog/toasting.h
+++ b/src/include/catalog/toasting.h
@@ -71,6 +71,8 @@ DECLARE_TOAST(pg_trigger, 2336, 2337);
 DECLARE_TOAST(pg_ts_dict, 4169, 4170);
 DECLARE_TOAST(pg_type, 4171, 4172);
 DECLARE_TOAST(pg_user_mapping, 4173, 4174);
+DECLARE_TOAST(pg_ts_config_map, 4187, 4188);
+
 
 /* shared catalogs */
 DECLARE_TOAST(pg_authid, 4175, 4176);
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 43f1552241..3e115404b4 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -384,6 +384,9 @@ typedef enum NodeTag
 	T_CreateEnumStmt,
 	T_CreateRangeStmt,
 	T_AlterEnumStmt,
+	T_DictMapExprElem,
+	T_DictMapElem,
+	T_DictMapCase,
 	T_AlterTSDictionaryStmt,
 	T_AlterTSConfigurationStmt,
 	T_CreateFdwStmt,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 9a5d91a198..e4e3194eb3 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3412,6 +3412,50 @@ typedef enum AlterTSConfigType
 	ALTER_TSCONFIG_DROP_MAPPING
 } AlterTSConfigType;
 
+/*
+ * TS Configuration expression tree element's types
+ */
+typedef enum DictMapElemType
+{
+	DICT_MAP_CASE,
+	DICT_MAP_EXPRESSION,
+	DICT_MAP_KEEP,
+	DICT_MAP_DICTIONARY
+} DictMapElemType;
+
+/*
+ * TS Configuration expression tree abstract element
+ */
+typedef struct DictMapElem
+{
+	NodeTag		type;
+	int8		kind;			/* See DictMapElemType */
+	void	   *data;			/* Type should be detected by kind value */
+} DictMapElem;
+
+/*
+ * TS Configuration expression tree element with operator and operands
+ */
+typedef struct DictMapExprElem
+{
+	NodeTag		type;
+	DictMapElem *left;
+	DictMapElem *right;
+	int8		oper;
+} DictMapExprElem;
+
+/*
+ * TS Configuration expression tree CASE element
+ */
+typedef struct DictMapCase
+{
+	NodeTag		type;
+	struct DictMapElem *condition;
+	struct DictMapElem *command;
+	struct DictMapElem *elsebranch;
+	bool		match;
+} DictMapCase;
+
 typedef struct AlterTSConfigurationStmt
 {
 	NodeTag		type;
@@ -3424,6 +3468,7 @@ typedef struct AlterTSConfigurationStmt
 	 */
 	List	   *tokentype;		/* list of Value strings */
 	List	   *dicts;			/* list of list of Value strings */
+	DictMapElem *dict_map;		/* tree of the mapping expression */
 	bool		override;		/* if true - remove old variant */
 	bool		replace;		/* if true - replace dictionary by another */
 	bool		missing_ok;		/* for DROP - skip error if missing? */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 23db40147b..1f58c319e8 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -221,6 +221,7 @@ PG_KEYWORD("is", IS, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isnull", ISNULL, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("isolation", ISOLATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("join", JOIN, TYPE_FUNC_NAME_KEYWORD)
+PG_KEYWORD("keep", KEEP, RESERVED_KEYWORD)
 PG_KEYWORD("key", KEY, UNRESERVED_KEYWORD)
 PG_KEYWORD("label", LABEL, UNRESERVED_KEYWORD)
 PG_KEYWORD("language", LANGUAGE, UNRESERVED_KEYWORD)
@@ -243,6 +244,7 @@ PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("logged", LOGGED, UNRESERVED_KEYWORD)
+PG_KEYWORD("map", MAP, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h
index 410f1d54af..4633dd7618 100644
--- a/src/include/tsearch/ts_cache.h
+++ b/src/include/tsearch/ts_cache.h
@@ -14,6 +14,7 @@
 #define TS_CACHE_H
 
 #include "utils/guc.h"
+#include "catalog/pg_ts_config_map.h"
 
 
 /*
@@ -66,6 +67,7 @@ typedef struct
 {
 	int			len;
 	Oid		   *dictIds;
+	int32	   *dictOptions;
 } ListDictionary;
 
 typedef struct
@@ -77,7 +79,7 @@ typedef struct
 	Oid			prsId;
 
 	int			lenmap;
-	ListDictionary *map;
+	TSMapElement **map;
 } TSConfigCacheEntry;
 
 
diff --git a/src/include/tsearch/ts_configmap.h b/src/include/tsearch/ts_configmap.h
new file mode 100644
index 0000000000..79e618052e
--- /dev/null
+++ b/src/include/tsearch/ts_configmap.h
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * ts_configmap.h
+ *	  internal representation of text search configuration and utilities for it
+ *
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
+ *
+ * src/include/tsearch/ts_configmap.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _PG_TS_CONFIGMAP_H_
+#define _PG_TS_CONFIGMAP_H_
+
+#include "utils/jsonb.h"
+#include "catalog/pg_ts_config_map.h"
+
+/*
+ * Configuration storage functions
+ * Provide interface to convert ts_configuration into JSONB and vice versa
+ */
+
+/* Convert TSMapElement structure into JSONB */
+extern Jsonb *TSMapToJsonb(TSMapElement *config);
+
+/* Extract a TSMapElement from JSONB-formatted data */
+extern TSMapElement *JsonbToTSMap(Jsonb *json);
+/* Replace all occurrences of oldDict with newDict */
+extern void TSMapReplaceDictionary(TSMapElement *config, Oid oldDict, Oid newDict);
+
+/* Move rule list into specified memory context */
+extern TSMapElement *TSMapMoveToMemoryContext(TSMapElement *config, MemoryContext context);
+/* Free all nodes of the rule list */
+extern void TSMapElementFree(TSMapElement *element);
+
+/* Print map in human-readable format */
+extern void TSMapPrintElement(TSMapElement *config, StringInfo result);
+
+/* Print dictionary name for a given Oid */
+extern void TSMapPrintDictName(Oid dictId, StringInfo result);
+
+/* Return all dictionaries used in config */
+extern Oid *TSMapGetDictionaries(TSMapElement *config);
+
+/* Do a deep comparison of two TSMapElements. Doesn't check parents of elements */
+extern bool TSMapElementEquals(TSMapElement *a, TSMapElement *b);
+
+#endif							/* _PG_TS_CONFIGMAP_H_ */
diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h
index 0b7a5aa68e..d970eec0ab 100644
--- a/src/include/tsearch/ts_public.h
+++ b/src/include/tsearch/ts_public.h
@@ -115,6 +115,7 @@ typedef struct
 #define TSL_ADDPOS		0x01
 #define TSL_PREFIX		0x02
 #define TSL_FILTER		0x04
+#define TSL_MULTI		0x08
 
 /*
  * Struct for supporting complex dictionaries like thesaurus.
diff --git a/src/test/regress/expected/oidjoins.out b/src/test/regress/expected/oidjoins.out
index ef268d348e..a398e247c0 100644
--- a/src/test/regress/expected/oidjoins.out
+++ b/src/test/regress/expected/oidjoins.out
@@ -1097,14 +1097,6 @@ WHERE	mapcfg != 0 AND
 ------+--------
 (0 rows)
 
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
- ctid | mapdict 
-------+---------
-(0 rows)
-
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out
index 2524ec2768..cfc7579aee 100644
--- a/src/test/regress/expected/tsdicts.out
+++ b/src/test/regress/expected/tsdicts.out
@@ -450,6 +450,105 @@ SELECT ts_lexize('thesaurus', 'one');
  {1}
 (1 row)
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+SELECT to_tsvector('english_union', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'books');
+    to_tsvector     
+--------------------
+ 'book':1 'books':1
+(1 row)
+
+SELECT to_tsvector('english_union', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+SELECT to_tsvector('english_intersect', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'books');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_intersect', 'booking');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+SELECT to_tsvector('english_except', 'book');
+ to_tsvector 
+-------------
+ 
+(1 row)
+
+SELECT to_tsvector('english_except', 'books');
+ to_tsvector 
+-------------
+ 'books':1
+(1 row)
+
+SELECT to_tsvector('english_except', 'booking');
+ to_tsvector 
+-------------
+ 'booking':1
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+SELECT to_tsvector('english_branches', 'book');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'books');
+ to_tsvector 
+-------------
+ 'book':1
+(1 row)
+
+SELECT to_tsvector('english_branches', 'booking');
+     to_tsvector      
+----------------------
+ 'book':1 'booking':1
+(1 row)
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -610,6 +709,163 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a
  'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8
 (1 row)
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+              to_tsvector              
+---------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                     to_tsvector                     
+-----------------------------------------------------
+ '1987a':6 'mysteri':2 'ring':3 'sn':5 'supernova':5
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+                              to_tsvector                               
+------------------------------------------------------------------------
+ '1987a':6 'mysterious':2 'of':4 'rings':3 'sn':5 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+            Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |                     Dictionaries                      
+-----------------+-------------------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN MATCH THEN simple UNION thesaurus+
+                 | ELSE simple                                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+      to_tsvector       
+------------------------
+ '12':1 'one':1 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+            to_tsvector            
+-----------------------------------
+ '123':1 'one':1 'three':3 'two':2
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+           to_tsvector           
+---------------------------------
+ '12':1 'four':3 'one':1 'two':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+      Text search configuration "public.thesaurus_tst"
+Parser: "pg_catalog.default"
+      Token      |               Dictionaries               
+-----------------+------------------------------------------
+ asciihword      | synonym, thesaurus, english_stem
+ asciiword       | CASE thesaurus WHEN NO MATCH THEN simple+
+                 | ELSE thesaurus                          +
+                 | END
+ email           | simple
+ file            | simple
+ float           | simple
+ host            | simple
+ hword           | english_stem
+ hword_asciipart | synonym, thesaurus, english_stem
+ hword_numpart   | simple
+ hword_part      | english_stem
+ int             | simple
+ numhword        | simple
+ numword         | simple
+ sfloat          | simple
+ uint            | simple
+ url             | simple
+ url_path        | simple
+ version         | simple
+ word            | english_stem
+
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector    
+------------------
+ '12':1 'books':2
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+ to_tsvector 
+-------------
+ '12':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+ to_tsvector 
+-------------
+ '123':1
+(1 row)
+
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+   to_tsvector   
+-----------------
+ '12':1 'book':2
+(1 row)
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+                                     to_tsvector                                      
+--------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+                                                to_tsvector                                                
+-----------------------------------------------------------------------------------------------------------
+ '1987a':6 'mysteri':2 'mysterious':2 'of':4 'pgsql':7 'postgr':7 'ring':3 'rings':3 'supernova':5 'the':1
+(1 row)
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out
index b088ff0d4f..9ebf5b9b26 100644
--- a/src/test/regress/expected/tsearch.out
+++ b/src/test/regress/expected/tsearch.out
@@ -36,11 +36,11 @@ WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 -----+---------
 (0 rows)
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
- mapcfg | maptokentype | mapseqno 
---------+--------------+----------
+WHERE mapcfg = 0;
+ mapcfg | maptokentype 
+--------+--------------
 (0 rows)
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
@@ -51,8 +51,8 @@ RIGHT JOIN pg_ts_config_map AS m
     ON (tt.cfgid=m.mapcfg AND tt.tokid=m.maptokentype)
 WHERE
     tt.cfgid IS NULL OR tt.tokid IS NULL;
- cfgid | tokid | mapcfg | maptokentype | mapseqno | mapdict 
--------+-------+--------+--------------+----------+---------
+ cfgid | tokid | mapcfg | maptokentype | mapdicts 
+-------+-------+--------+--------------+----------
 (0 rows)
 
 -- test basic text search behavior without indexes, then with
@@ -567,55 +567,55 @@ SELECT length(to_tsvector('english', '345 qwe@efd.r '' http://www.com/ http://ae
 
 -- ts_debug
 SELECT * from ts_debug('english', '<myns:foo-bar_baz.blurfl>abc&nm1;def&#xa9;ghi&#245;jkl</myns:foo-bar_baz.blurfl>');
-   alias   |   description   |           token            |  dictionaries  |  dictionary  | lexemes 
------------+-----------------+----------------------------+----------------+--------------+---------
- tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |              | 
- asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem | {abc}
- entity    | XML entity      | &nm1;                      | {}             |              | 
- asciiword | Word, all ASCII | def                        | {english_stem} | english_stem | {def}
- entity    | XML entity      | &#xa9;                     | {}             |              | 
- asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem | {ghi}
- entity    | XML entity      | &#245;                     | {}             |              | 
- asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem | {jkl}
- tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |              | 
+   alias   |   description   |           token            |  dictionaries  | configuration |   command    | lexemes 
+-----------+-----------------+----------------------------+----------------+---------------+--------------+---------
+ tag       | XML tag         | <myns:foo-bar_baz.blurfl>  | {}             |               |              | 
+ asciiword | Word, all ASCII | abc                        | {english_stem} | english_stem  | english_stem | {abc}
+ entity    | XML entity      | &nm1;                      | {}             |               |              | 
+ asciiword | Word, all ASCII | def                        | {english_stem} | english_stem  | english_stem | {def}
+ entity    | XML entity      | &#xa9;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | ghi                        | {english_stem} | english_stem  | english_stem | {ghi}
+ entity    | XML entity      | &#245;                     | {}             |               |              | 
+ asciiword | Word, all ASCII | jkl                        | {english_stem} | english_stem  | english_stem | {jkl}
+ tag       | XML tag         | </myns:foo-bar_baz.blurfl> | {}             |               |              | 
 (9 rows)
 
 -- check parsing of URLs
 SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx</span>');
-  alias   |  description  |                 token                  | dictionaries | dictionary |                 lexemes                  
-----------+---------------+----------------------------------------+--------------+------------+------------------------------------------
- protocol | Protocol head | http://                                | {}           |            | 
- url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple     | {www.harewoodsolutions.co.uk/press.aspx}
- host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple     | {www.harewoodsolutions.co.uk}
- url_path | URL path      | /press.aspx                            | {simple}     | simple     | {/press.aspx}
- tag      | XML tag       | </span>                                | {}           |            | 
+  alias   |  description  |                 token                  | dictionaries | configuration | command |                 lexemes                  
+----------+---------------+----------------------------------------+--------------+---------------+---------+------------------------------------------
+ protocol | Protocol head | http://                                | {}           |               |         | 
+ url      | URL           | www.harewoodsolutions.co.uk/press.aspx | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk/press.aspx}
+ host     | Host          | www.harewoodsolutions.co.uk            | {simple}     | simple        | simple  | {www.harewoodsolutions.co.uk}
+ url_path | URL path      | /press.aspx                            | {simple}     | simple        | simple  | {/press.aspx}
+ tag      | XML tag       | </span>                                | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw<span>');
-  alias   |  description  |           token            | dictionaries | dictionary |           lexemes            
-----------+---------------+----------------------------+--------------+------------+------------------------------
- protocol | Protocol head | http://                    | {}           |            | 
- url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple     | {aew.wer0c.ewr/id?ad=qwe&dw}
- host     | Host          | aew.wer0c.ewr              | {simple}     | simple     | {aew.wer0c.ewr}
- url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple     | {/id?ad=qwe&dw}
- tag      | XML tag       | <span>                     | {}           |            | 
+  alias   |  description  |           token            | dictionaries | configuration | command |           lexemes            
+----------+---------------+----------------------------+--------------+---------------+---------+------------------------------
+ protocol | Protocol head | http://                    | {}           |               |         | 
+ url      | URL           | aew.wer0c.ewr/id?ad=qwe&dw | {simple}     | simple        | simple  | {aew.wer0c.ewr/id?ad=qwe&dw}
+ host     | Host          | aew.wer0c.ewr              | {simple}     | simple        | simple  | {aew.wer0c.ewr}
+ url_path | URL path      | /id?ad=qwe&dw              | {simple}     | simple        | simple  | {/id?ad=qwe&dw}
+ tag      | XML tag       | <span>                     | {}           |               |         | 
 (5 rows)
 
 SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?');
-  alias   |  description  |        token         | dictionaries | dictionary |        lexemes         
-----------+---------------+----------------------+--------------+------------+------------------------
- protocol | Protocol head | http://              | {}           |            | 
- url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple     | {5aew.werc.ewr:8100/?}
- host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path      | /?                   | {simple}     | simple     | {/?}
+  alias   |  description  |        token         | dictionaries | configuration | command |        lexemes         
+----------+---------------+----------------------+--------------+---------------+---------+------------------------
+ protocol | Protocol head | http://              | {}           |               |         | 
+ url      | URL           | 5aew.werc.ewr:8100/? | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?}
+ host     | Host          | 5aew.werc.ewr:8100   | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path      | /?                   | {simple}     | simple        | simple  | {/?}
 (4 rows)
 
 SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx');
-  alias   | description |         token          | dictionaries | dictionary |         lexemes          
-----------+-------------+------------------------+--------------+------------+--------------------------
- url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple     | {5aew.werc.ewr:8100/?xx}
- host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple     | {5aew.werc.ewr:8100}
- url_path | URL path    | /?xx                   | {simple}     | simple     | {/?xx}
+  alias   | description |         token          | dictionaries | configuration | command |         lexemes          
+----------+-------------+------------------------+--------------+---------------+---------+--------------------------
+ url      | URL         | 5aew.werc.ewr:8100/?xx | {simple}     | simple        | simple  | {5aew.werc.ewr:8100/?xx}
+ host     | Host        | 5aew.werc.ewr:8100     | {simple}     | simple        | simple  | {5aew.werc.ewr:8100}
+ url_path | URL path    | /?xx                   | {simple}     | simple        | simple  | {/?xx}
 (3 rows)
 
 SELECT token, alias,
diff --git a/src/test/regress/sql/oidjoins.sql b/src/test/regress/sql/oidjoins.sql
index c8291d3973..14bea4c758 100644
--- a/src/test/regress/sql/oidjoins.sql
+++ b/src/test/regress/sql/oidjoins.sql
@@ -549,10 +549,6 @@ SELECT	ctid, mapcfg
 FROM	pg_catalog.pg_ts_config_map fk
 WHERE	mapcfg != 0 AND
 	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_config pk WHERE pk.oid = fk.mapcfg);
-SELECT	ctid, mapdict
-FROM	pg_catalog.pg_ts_config_map fk
-WHERE	mapdict != 0 AND
-	NOT EXISTS(SELECT 1 FROM pg_catalog.pg_ts_dict pk WHERE pk.oid = fk.mapdict);
 SELECT	ctid, dictnamespace
 FROM	pg_catalog.pg_ts_dict fk
 WHERE	dictnamespace != 0 AND
diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql
index 60906f6549..43203afe61 100644
--- a/src/test/regress/sql/tsdicts.sql
+++ b/src/test/regress/sql/tsdicts.sql
@@ -122,6 +122,57 @@ CREATE TEXT SEARCH DICTIONARY thesaurus (
 
 SELECT ts_lexize('thesaurus', 'one');
 
+-- test dictionary pipeline in configuration
+CREATE TEXT SEARCH CONFIGURATION english_union(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_union ALTER MAPPING FOR
+	asciiword
+	WITH english_stem UNION simple;
+
+SELECT to_tsvector('english_union', 'book');
+SELECT to_tsvector('english_union', 'books');
+SELECT to_tsvector('english_union', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_intersect(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_intersect ALTER MAPPING FOR
+	asciiword
+	WITH english_stem INTERSECT simple;
+
+SELECT to_tsvector('english_intersect', 'book');
+SELECT to_tsvector('english_intersect', 'books');
+SELECT to_tsvector('english_intersect', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_except(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_except ALTER MAPPING FOR
+	asciiword
+	WITH simple EXCEPT english_stem;
+
+SELECT to_tsvector('english_except', 'book');
+SELECT to_tsvector('english_except', 'books');
+SELECT to_tsvector('english_except', 'booking');
+
+CREATE TEXT SEARCH CONFIGURATION english_branches(
+						COPY=english
+);
+
+ALTER TEXT SEARCH CONFIGURATION english_branches ALTER MAPPING FOR
+	asciiword
+	WITH CASE ispell WHEN MATCH THEN KEEP
+		ELSE english_stem
+	END;
+
+SELECT to_tsvector('english_branches', 'book');
+SELECT to_tsvector('english_branches', 'books');
+SELECT to_tsvector('english_branches', 'booking');
+
 -- Test ispell dictionary in configuration
 CREATE TEXT SEARCH CONFIGURATION ispell_tst (
 						COPY=english
@@ -194,6 +245,50 @@ SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
 SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)');
 SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets');
 
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN KEEP ELSE english_stem
+END;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH thesaurus UNION english_stem;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH simple UNION thesaurus;
+SELECT to_tsvector('thesaurus_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN MATCH THEN simple UNION thesaurus
+	ELSE simple
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two four');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR asciiword WITH CASE
+	thesaurus WHEN NO MATCH THEN simple ELSE thesaurus
+END;
+\dF+ thesaurus_tst
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING
+	REPLACE simple WITH english_stem;
+SELECT to_tsvector('thesaurus_tst', 'one two');
+SELECT to_tsvector('thesaurus_tst', 'one two three');
+SELECT to_tsvector('thesaurus_tst', 'one two books');
+
+CREATE TEXT SEARCH CONFIGURATION operators_tst (
+						COPY=thesaurus_tst
+);
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION simple;
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A');
+
+ALTER TEXT SEARCH CONFIGURATION operators_tst ALTER MAPPING FOR asciiword WITH english_stem UNION (synonym, simple);
+SELECT to_tsvector('operators_tst', 'The Mysterious Rings of Supernova 1987A Postgres');
+
 -- invalid: non-lowercase quoted identifiers
 CREATE TEXT SEARCH DICTIONARY tsdict_case
 (
diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql
index 637bfb3012..26d771b2b5 100644
--- a/src/test/regress/sql/tsearch.sql
+++ b/src/test/regress/sql/tsearch.sql
@@ -26,9 +26,9 @@ SELECT oid, cfgname
 FROM pg_ts_config
 WHERE cfgnamespace = 0 OR cfgowner = 0 OR cfgparser = 0;
 
-SELECT mapcfg, maptokentype, mapseqno
+SELECT mapcfg, maptokentype
 FROM pg_ts_config_map
-WHERE mapcfg = 0 OR mapdict = 0;
+WHERE mapcfg = 0;
 
 -- Look for pg_ts_config_map entries that aren't one of parser's token types
 SELECT * FROM
#30Alexander Korotkov
a.korotkov@postgrespro.ru
In reply to: Aleksandr Parfenov (#25)
Re: Flexible configuration for full-text search

On Fri, Apr 6, 2018 at 10:52 AM Aleksandr Parfenov
<a.parfenov@postgrespro.ru> wrote:

On Thu, 5 Apr 2018 17:26:10 +0300
Teodor Sigaev <teodor@sigaev.ru> wrote:

4) The initial approach suggested distinguishing three states of a
dictionary result: null (unknown word), stopword and usual word. Now
there are only two, so we have lost the possibility to catch stopwords.
One way to use stopwords is: suppose we have two identical fts
configurations, except that one skips stopwords and the other doesn't.
The second configuration is used for indexing, and the first one for
search by default. But if we can't find anything ('to be or to be' -
the phrase contains stopwords only) then we can use the second
configuration. For now, we need to keep two variants of each
dictionary - with and without stopwords. But if it were possible to
distinguish stop and non-stop words in the configuration, then we
wouldn't need to have duplicated dictionaries.

With the proposed way of configuration it is possible to create a
special dictionary used only for stopword checking and to consult it
at decision-making time.

For example, we can create a dictionary english_stopword which returns
the word itself in case of a stopword and NULL otherwise. With such a
dictionary we can create a configuration:

ALTER TEXT SEARCH CONFIGURATION test_cfg ALTER MAPPING FOR asciiword,
word WITH
CASE english_stopword WHEN NO MATCH THEN english_hunspell END;

In the described example, english_hunspell can be implemented without
any stopword processing at all, and we can divide stopword processing
and the processing of other words into separate dictionaries.
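
A minimal sketch of one way to build such a dictionary on the stock
simple template in accept=false mode (as discussed later in this
thread); note that simple in this mode reports a stopword result for
stopwords rather than returning the word itself, so the CASE condition
may need adjusting:

CREATE TEXT SEARCH DICTIONARY english_stopword (
    TEMPLATE = pg_catalog.simple,
    STOPWORDS = english,
    ACCEPT = false
);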

The key point of the patch is to process stopwords the same way as
other words at the level of the PostgreSQL internals and to give users
an instrument to process them in a special way via configurations.

If we're going to do it that way, by providing separate dictionaries
for stop words, then I think we should also do it for the builtin
dictionaries and configurations. So, I think this patch should also
split the builtin dictionaries into stemmers and stop word
dictionaries, and provide corresponding configurations over them. Some
benchmarking would also be needed to show that the new way of defining
configurations performs no worse than the previous one.
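
For illustration, a sketch of what a configuration over such split
dictionaries could look like, assuming a hypothetical english_stopword
stop word dictionary next to the existing english_stem stemmer:

ALTER TEXT SEARCH CONFIGURATION english ALTER MAPPING FOR asciiword WITH
CASE english_stopword WHEN NO MATCH THEN english_stem END;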

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#31Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alexander Korotkov (#30)
Re: Flexible configuration for full-text search

Alexander Korotkov <a.korotkov@postgrespro.ru> writes:

On Fri, Apr 6, 2018 at 10:52 AM Aleksandr Parfenov
<a.parfenov@postgrespro.ru> wrote:

The key point of the patch is to process stopwords the same way as
other words at the level of the PostgreSQL internals and to give users
an instrument to process them in a special way via configurations.

If we're going to do it that way, by providing separate dictionaries
for stop words, then I think we should also do it for the builtin
dictionaries and configurations. So, I think this patch should also
split the builtin dictionaries into stemmers and stop word
dictionaries, and provide corresponding configurations over them. Some
benchmarking would also be needed to show that the new way of defining
configurations performs no worse than the previous one.

I'm hesitant about the backwards-compatibility aspects of this.
Yes, we could set up the standard text search configurations to still
work the same as before, but how will you do it without breaking existing
custom configurations that use those dictionaries?

regards, tom lane

#32Alexander Korotkov
a.korotkov@postgrespro.ru
In reply to: Tom Lane (#31)
Re: Flexible configuration for full-text search

On Fri, Aug 24, 2018 at 1:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Alexander Korotkov <a.korotkov@postgrespro.ru> writes:

On Fri, Apr 6, 2018 at 10:52 AM Aleksandr Parfenov
<a.parfenov@postgrespro.ru> wrote:

The key point of the patch is to process stopwords the same way as
other words at the level of the PostgreSQL internals and to give users
an instrument to process them in a special way via configurations.

If we're going to do it that way, by providing separate dictionaries
for stop words, then I think we should also do it for the builtin
dictionaries and configurations. So, I think this patch should also
split the builtin dictionaries into stemmers and stop word
dictionaries, and provide corresponding configurations over them. Some
benchmarking would also be needed to show that the new way of defining
configurations performs no worse than the previous one.

I'm hesitant about the backwards-compatibility aspects of this.
Yes, we could set up the standard text search configurations to still
work the same as before, but how will you do it without breaking existing
custom configurations that use those dictionaries?

Agreed, backward compatibility is important here. Probably we should
leave the old dictionaries for that. But I just meant that if we
introduce a new (better) way of stop word handling and encourage users
to use it, then it would look strange if the default configurations
work the old way...

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#33Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Alexander Korotkov (#32)
Re: Flexible configuration for full-text search

On Fri, 24 Aug 2018 18:50:38 +0300
Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:

Agreed, backward compatibility is important here. Probably we should
leave the old dictionaries for that. But I just meant that if we
introduce a new (better) way of stop word handling and encourage users
to use it, then it would look strange if the default configurations
work the old way...

I agree with Alexander. The only drawback I see is that after the
addition of the new dictionaries, there will be 3 dictionaries for
each language: the old one, a stop-word filter for the language, and a
stemmer dictionary.

Also, the new approach will resolve the ambiguity in the case of the
'simple' dictionary. Currently, it filters stop words for the language
that was selected during DB initialization.

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

#34Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksandr Parfenov (#33)
Re: Flexible configuration for full-text search

On Tue, 28 Aug 2018 12:40:32 +0700
Aleksandr Parfenov <a.parfenov@postgrespro.ru> wrote:

On Fri, 24 Aug 2018 18:50:38 +0300
Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:

Agreed, backward compatibility is important here. Probably we should
leave the old dictionaries for that. But I just meant that if we
introduce a new (better) way of stop word handling and encourage users
to use it, then it would look strange if the default configurations
work the old way...

I agree with Alexander. The only drawback I see is that after the
addition of the new dictionaries, there will be 3 dictionaries for
each language: the old one, a stop-word filter for the language, and a
stemmer dictionary.

During work on the new version of the patch, I found an issue in the
proposed syntax. At the beginning of the conversation, there was a
suggestion to split stop word filtering and word normalization. At this
stage of development, we can use a different dictionary for stop word
detection, but if we drop the word, the word counter won't increase and
the stop word will be processed as an unknown word.

Currently, I see two solutions:

1) Keep the old way of stop word filtering. The drawback of this
approach is that it mixes word normalization and stop word detection
logic inside a dictionary. It can be solved by using the 'simple'
dictionary in accept=false mode as a stop word filter.

2) Add an action STOPWORD alongside KEEP and DROP (the latter is not
implemented in the previous patch, but I think it is good to have both
of them), meaning "increase the word counter but don't add a lexeme to
the vector"; see the sketch below.
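
A sketch of how option 2 could look with the hypothetical STOPWORD
action (english_stopword and english_hunspell are assumed
dictionaries):

ALTER TEXT SEARCH CONFIGURATION test_cfg ALTER MAPPING FOR asciiword WITH
CASE english_stopword WHEN MATCH THEN STOPWORD
     ELSE english_hunspell
END;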

Any suggestions on the issue?

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

#35Aleksandr Parfenov
a.parfenov@postgrespro.ru
In reply to: Aleksandr Parfenov (#34)
Re: Flexible configuration for full-text search

Hello hackers!

As I wrote a few weeks ago, there is an issue with stopword processing
in the proposed syntax for full-text configurations. I want to separate
word normalization and stopword detection into two separate
dictionaries. The problem is how to configure the stopword detection
dictionary.

The cause of the problem is that stopwords are counted, but no lexemes
are used for them. However, do we have to count stopwords during word
counting, or can we ignore them like unknown words? The problem I see
is backward compatibility, since we would have to regenerate all
queries and vectors. But is it a real problem, or can we change the
behavior in this way?
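
To illustrate the current counting behavior with the stock english
configuration:

SELECT to_tsvector('english', 'one of the two');
   to_tsvector
-----------------
 'one':1 'two':4

The stopwords 'of' and 'the' produce no lexemes but still occupy
positions 2 and 3, which is what keeps phrase distances stable.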

--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company

#36Tom Lane
tgl@sss.pgh.pa.us
In reply to: Aleksandr Parfenov (#35)
Re: Flexible configuration for full-text search

Aleksandr Parfenov <a.parfenov@postgrespro.ru> writes:

As I wrote a few weeks ago, there is an issue with stopword processing
in the proposed syntax for full-text configurations. I want to separate
word normalization and stopword detection into two separate
dictionaries. The problem is how to configure the stopword detection
dictionary.

The cause of the problem is that stopwords are counted, but no lexemes
are used for them. However, do we have to count stopwords during word
counting, or can we ignore them like unknown words? The problem I see
is backward compatibility, since we would have to regenerate all
queries and vectors. But is it a real problem, or can we change the
behavior in this way?

I think there should be a pretty high bar for forcing people to regenerate
all that data when they haven't made any change of their own choice.

Also, I'm not very clear on exactly what you're proposing here, but it
sounds like it'd have the effect of changing whether stopwords count in
phrase distances ('a <N> b'). I think that's right out --- whether or not
you feel the current distance behavior is ideal, asking people to *both*
rebuild all their derived data *and* change their applications will cause
a revolt. It's not sufficiently obviously broken that we can change it.

regards, tom lane

#37Dmitry Dolgov
9erthalion6@gmail.com
In reply to: Tom Lane (#36)
Re: Flexible configuration for full-text search

On Wed, Aug 29, 2018 at 10:38 AM Aleksandr Parfenov <a.parfenov@postgrespro.ru> wrote:

On Tue, 28 Aug 2018 12:40:32 +0700
Aleksandr Parfenov <a.parfenov@postgrespro.ru> wrote:

On Fri, 24 Aug 2018 18:50:38 +0300
Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:

Agreed, backward compatibility is important here. Probably we should
leave the old dictionaries for that. But I just meant that if we
introduce a new (better) way of stop word handling and encourage users
to use it, then it would look strange if the default configurations
work the old way...

I agree with Alexander. The only drawback I see is that after the
addition of the new dictionaries, there will be 3 dictionaries for
each language: the old one, a stop-word filter for the language, and a
stemmer dictionary.

During work on the new version of the patch, I found an issue in the
proposed syntax. At the beginning of the conversation, there was a
suggestion to split stop word filtering and word normalization. At this
stage of development, we can use a different dictionary for stop word
detection, but if we drop the word, the word counter won't increase and
the stop word will be processed as an unknown word.

Maybe it would be better if you or some of your colleagues (Alexander,
Arthur?) could post this new version, because the current one has some
conflicts - that would make it easier for reviewers. For now I'll move
it to the next CF.

#38Michael Paquier
michael@paquier.xyz
In reply to: Dmitry Dolgov (#37)
Re: Flexible configuration for full-text search

On Thu, Nov 29, 2018 at 02:02:16PM +0100, Dmitry Dolgov wrote:

Maybe it would be better if you or some of your colleagues (Alexander,
Arthur?) could post this new version, because the current one has some
conflicts - that would make it easier for reviewers. For now I'll move
it to the next CF.

No updates for some time now, marked as returned with feedback.
--
Michael