PATCH: recursive json_populate_record()

Started by Nikita Glukhov over 9 years ago · 27 messages · pgsql-hackers
#1Nikita Glukhov
n.gluhov@postgrespro.ru

Hi.

The first attached patch implements recursive processing of nested
objects and arrays in json[b]_populate_record[set](),
json[b]_to_record[set](). See regression tests for examples.

It also fixes the following errors/inconsistencies caused by lost
quoting of string json values:

[master]=# select * from json_to_record('{"js": "a"}') as rec(js json);
ERROR: invalid input syntax for type json
DETAIL: Token "a" is invalid.
CONTEXT: JSON data, line 1: a

[master]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
------
true

[patched]=# select * from json_to_record('{"js": "a"}') as rec(js json);
js
-----
"a"

[patched]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
--------
"true"

The second patch adds assignment of correct ndims to array fields of
RECORD function result types.
Without this patch, attndims in tuple descriptors of RECORD types is
always 0 and the corresponding assertion fails in the next test:

[patched]=# select json_to_record('{"a": [1, 2, 3]}') as rec(a int[]);

Should I submit these patches to commitfest?

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

0001_recursive_json_populate_record_v01.patch (text/x-patch, +1751 -458)
0002_assign_ndims_to_record_function_result_types_v01.patch (text/x-patch, +28 -7)
#2Michael Paquier
michael@paquier.xyz
In reply to: Nikita Glukhov (#1)
Re: PATCH: recursive json_populate_record()

On Tue, Dec 13, 2016 at 9:38 AM, Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:

It also fixes the following errors/inconsistencies caused by lost quoting of
string json values:

[master]=# select * from json_to_record('{"js": "a"}') as rec(js json);
ERROR: invalid input syntax for type json
DETAIL: Token "a" is invalid.
CONTEXT: JSON data, line 1: a

The docs mention that this is based on a best effort now. It may be
interesting to improve that.

[master]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
------
true

[patched]=# select * from json_to_record('{"js": "a"}') as rec(js json);
js
-----
"a"

That's indeed valid JSON.

[patched]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
--------
"true"

And so is that.

The second patch adds assignment of correct ndims to array fields of RECORD
function result types.
Without this patch, attndims in tuple descriptors of RECORD types is always
0 and the corresponding assertion fails in the next test:

[patched]=# select json_to_record('{"a": [1, 2, 3]}') as rec(a int[]);

Should I submit these patches to commitfest?

It seems to me that it would be a good idea to add them.
--
Michael

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3Aleksander Alekseev
aleksander@timescale.com
In reply to: Michael Paquier (#2)
Re: PATCH: recursive json_populate_record()

Hello.

I've noticed that this patch is on CF and needs a reviewer so I decided
to take a look. Code looks good to me in general, it's well covered by
tests and passes `make check-world`.

However it would be nice to have a little more comments. In my opinion
every procedure have to have at least a short description - what it
does, what arguments it receives and what it returns, even if it's a
static procedure. Same applies for structures and their fields.

Another thing that bothers me is a FIXME comment:

```
tupdesc = lookup_rowtype_tupdesc(type, typmod); /* FIXME cache */
```

I suggest removing it or implementing the caching here as planned.

On Tue, Dec 13, 2016 at 10:07:46AM +0900, Michael Paquier wrote:

On Tue, Dec 13, 2016 at 9:38 AM, Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:

It also fixes the following errors/inconsistencies caused by lost quoting of
string json values:

[master]=# select * from json_to_record('{"js": "a"}') as rec(js json);
ERROR: invalid input syntax for type json
DETAIL: Token "a" is invalid.
CONTEXT: JSON data, line 1: a

The docs mention that this is based on a best effort now. It may be
interesting to improve that.

[master]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
------
true

[patched]=# select * from json_to_record('{"js": "a"}') as rec(js json);
js
-----
"a"

That's indeed valid JSON.

[patched]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
--------
"true"

And so is that.

The second patch adds assignment of correct ndims to array fields of RECORD
function result types.
Without this patch, attndims in tuple descriptors of RECORD types is always
0 and the corresponding assertion fails in the next test:

[patched]=# select json_to_record('{"a": [1, 2, 3]}') as rec(a int[]);

Should I submit these patches to commitfest?

It seems to me that it would be a good idea to add them.
--
Michael


--
Best regards,
Aleksander Alekseev

#4Nikita Glukhov
n.gluhov@postgrespro.ru
In reply to: Aleksander Alekseev (#3)
Re: PATCH: recursive json_populate_record()

I've noticed that this patch is on CF and needs a reviewer so I decided
to take a look. Code looks good to me in general, it's well covered by
tests and passes `make check-world`.

Thanks for your review.

However, it would be nice to have a few more comments. In my opinion
every procedure has to have at least a short description - what it
does, what arguments it receives and what it returns, even if it's a
static procedure. The same applies to structures and their fields.

I have added some comments for functions and structures in the second
version of this patch.

Another thing that bothers me is a FIXME comment:

```
tupdesc = lookup_rowtype_tupdesc(type, typmod); /* FIXME cache */
```

I suggest removing it or implementing the caching here as planned.

I have implemented tuple descriptor caching here in populate_composite()
and also in populate_record_worker() (by using populate_composite()
instead of populate_record()). These improvements can speed up bulk
jsonb conversion by 15-20% in the simple test with two nested records.

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

0001_recursive_json_populate_record_v02.patch (text/x-patch, +1870 -460)
0002_assign_ndims_to_record_function_result_types_v02.patch (text/x-patch, +28 -7)
#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nikita Glukhov (#4)
Re: PATCH: recursive json_populate_record()

Nikita Glukhov <n.gluhov@postgrespro.ru> writes:

[ 0001_recursive_json_populate_record_v02.patch ]
[ 0002_assign_ndims_to_record_function_result_types_v02.patch ]

I do not see the point of the second one of these, and it adds no test
case showing why it would be needed. The example you quoted at the
start of the thread doesn't fail for me in HEAD, so I surmise that
it's falling foul of some assertion you added in the 0001 patch, but
if so I think that assertion is wrong. attndims is really syntactic
sugar only and doesn't affect anything meaningful semantically. Here
is an example showing why it shouldn't:

regression=# create table foo (d0 _int4, d1 int[], d2 int[3][4]);
CREATE TABLE
regression=# select attname,atttypid,attndims from pg_attribute where attrelid = 'foo'::regclass and attnum > 0;
 attname | atttypid | attndims
---------+----------+----------
 d0      |     1007 |        0
 d1      |     1007 |        1
 d2      |     1007 |        2
(3 rows)

Columns d0,d1,d2 are really all of the same type, and any code that
treats d0 and d1 differently is certainly broken.

So there should be no need to track a particular attndims for an output
column of a function result, and in most cases it's not clear to me where
you would get an attndims value from anyway.

regards, tom lane


#6Nikita Glukhov
n.gluhov@postgrespro.ru
In reply to: Tom Lane (#5)
Re: PATCH: recursive json_populate_record()

On 01/08/2017 09:52 PM, Tom Lane wrote:

The example you quoted at the start of the thread doesn't fail for me
in HEAD, so I surmise that it's falling foul of some assertion you added
in the 0001 patch, but if so I think that assertion is wrong. attndims
is really syntactic sugar only and doesn't affect anything meaningful
semantically. Here is an example showing why it shouldn't:

regression=# create table foo (d0 _int4, d1 int[], d2 int[3][4]);
CREATE TABLE
regression=# select attname,atttypid,attndims from pg_attribute where attrelid = 'foo'::regclass and attnum > 0;
 attname | atttypid | attndims
---------+----------+----------
 d0      |     1007 |        0
 d1      |     1007 |        1
 d2      |     1007 |        2
(3 rows)

Columns d0,d1,d2 are really all of the same type, and any code that
treats d0 and d1 differently is certainly broken.

Thank you for this example with the raw _int4 type. I didn't expect that
attndims could legally be zero. The assertion "ndims > 0" was indeed wrong;
it was there because I used attndims to verify the number of dimensions
of a populated json array.

I have fixed the first patch: when the number of dimensions is unknown,
we determine it simply by the number of successive opening brackets at
the start of a json array. But I'm still using for verification the non-zero
ndims values that can be obtained from a composite type attribute (attndims) or
from a domain array type (typndims) through the specially added function
get_type_ndims().
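
The bracket-counting heuristic described above can be sketched roughly like
this (a standalone illustration only, not the patch's actual code; the
function name and the whitespace handling are my own assumptions):

```c
#include <ctype.h>

/*
 * Guess the number of dimensions of a JSON array by counting the
 * successive opening brackets at its start, skipping whitespace.
 * Returns 0 if the text does not start with an array at all.
 */
int
guess_json_array_ndims(const char *json)
{
	int			ndims = 0;

	while (*json)
	{
		if (isspace((unsigned char) *json))
			json++;			/* ignore leading whitespace */
		else if (*json == '[')
		{
			ndims++;		/* one more nesting level */
			json++;
		}
		else
			break;			/* first non-bracket token ends the count */
	}
	return ndims;
}
```

For example, `guess_json_array_ndims("[[1,2],[3,4]]")` would report two
dimensions, while a JSON object would report zero.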

On 01/08/2017 09:52 PM, Tom Lane wrote:

I do not see the point of the second one of these, and it adds no test
case showing why it would be needed.

I have also added special test cases for json_to_record() showing the difference
in behavior that the second patch introduces (see the changes in json.out and
jsonb.out).

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

0001_recursive_json_populate_record_v03.patch (text/x-patch, +2143 -490)
0002_assign_ndims_to_record_function_result_types_v03.patch (text/x-patch, +32 -23)
#7Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nikita Glukhov (#6)
Re: PATCH: recursive json_populate_record()

Nikita Glukhov <n.gluhov@postgrespro.ru> writes:

On 01/08/2017 09:52 PM, Tom Lane wrote:

... attndims is really syntactic sugar only and doesn't affect anything
meaningful semantically.

I have fixed the first patch: when the number of dimensions is unknown,
we determine it simply by the number of successive opening brackets at
the start of a json array. But I'm still using for verification the non-zero
ndims values that can be obtained from a composite type attribute (attndims) or
from a domain array type (typndims) through the specially added function
get_type_ndims().

Apparently I didn't make my point forcefully enough. attndims is not
semantically significant in Postgres; the fact that we even store it
is just a historical artifact, I think. There might be an argument
to start enforcing it, but we'd have to do that across the board, and
I think most likely any such proposal would fail because of backwards
compatibility concerns. (There would also be a lot of discussion as
to whether, say, "int foo[3]" shouldn't enforce that the array length
is 3, not just that it be one-dimensional.) In the meantime, we are
*not* going to have attndims be semantically significant in just this one
function. That would not satisfy the principle of least astonishment.
Therefore, please remove this patch's dependence on attndims.

I looked through the patch briefly and have some other comments:

1. It's pretty bizarre that you need dummy forward struct declarations
for ColumnIOData and RecordIOData. The reason evidently is that somebody
ignored project style and put a bunch of typedefs after the local function
declarations in jsonfuncs.c. Project style of course is to put the
typedefs first, precisely because they may be needed in the local function
declarations. I will go move those declarations right now, as a single-
purpose patch, so that you have something a bit cleaner to modify.

2. The business with isstring, saved_scalar_is_string, etc makes me itch.
I'm having a hard time explaining exactly why, but it just seems like a
single-purpose kluge. Maybe it would seem less so if you saved the
original JsonTokenType instead. But also I'm wondering why the ultimate
consumers are concerned with string-ness as such. It seems like mainly
what they're trying to do is forbid converting a string into a non-scalar
Postgres type, but (a) why, and (b) if you are going to restrict it,
wouldn't it be more sensible to frame it as you can't convert any JSON
scalar to a non-scalar type? As to (a), it seems like you're just
introducing unnecessary compatibility breakage if you insist on that.
As to (b), really it seems like an appropriate restriction of that sort
would be like "if target type is composite, source must be a JSON object,
and if target type is array, source must be a JSON array". Personally
I think what you ought to do here is not change the semantics of cases
that are allowed today, but only change the semantics in the cases of
JSON object being assigned to PG composite and JSON array being assigned
to PG array; both of those fail today, so there's nobody depending on the
existing behavior. But if you reject string-to-composite cases then you
will be breaking existing apps, and I doubt people will thank you for it.

3. I think you'd be better off declaring ColumnIOData.type_category as an
actual enum type, not "char" with some specific values documented only
by a comment. Also, could you fold the domain case in as a fourth
type_category? Also, why does ColumnIOData have both an ndims field and
another one buried in io.array.ndims?

4. populate_array_report_expected_array() violates our error message
guidelines, first by using elog rather than ereport for a user-facing
error, and second by assembling the message from pieces, which would
make it untranslatable even if it were being reported through ereport.
I'm not sure if this needs to be fixed though; will the function even
still be needed once you remove the attndims dependency? Even if there
are still callers, you wouldn't necessarily need such a function if
you scale back your ambitions a bit as to the amount of detail required.
I'm not sure you really need to report what you think the dimensions
are when complaining that an array is nonrectangular.

5. The business with having some arguments that are only for json and
others that are only for jsonb, eg in populate_scalar(), also makes me
itch. I wonder whether this wouldn't all get a lot simpler and more
readable if we gave up on trying to share code between the two cases.
In populate_scalar(), for instance, the amount of code actually shared
between the two cases amounts to a whole two lines, which are dwarfed
by the amount of crud needed to deal with trying to serve both cases
in the one function. There are places where there's more sharable
code than that, but it seems like a better design might be to refactor
the sharable code out into subroutines called by separate functions for
json and jsonb.

6. I'm not too excited by your having invented JsonContainerSize,
JsonContainerIsObject, etc, and then using them in just this new
code. That is not really improving readability, it's just creating
stylistic inconsistency between these functions and the rest of the
jsonb code. If you want such macros I think it would be better to
submit a separate cosmetic patch that tries to hide such bit-tests
behind macros throughout the jsonb code. Another problem is that
these macros are coding hazards because they look like they yield bool
values but they don't really; assigning the results to bool variables,
for example, would fail on most platforms. Safer coding for a
general-purpose macro would be like

#define JsonContainerIsObject(jc) (((jc)->header & JB_FOBJECT) != 0)
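
The hazard is that the bare bit-test yields the raw masked flag bits, and with
a char-sized bool (as Postgres historically defined it) the nonzero high bits
are truncated away on assignment. A minimal standalone illustration (the flag
value mirrors JB_FOBJECT from jsonb.h, but the typedef, struct stub, and helper
names are my own):

```c
#include <stdint.h>

typedef char pg_bool;			/* historical Postgres-style char bool */

#define JB_FOBJECT 0x20000000	/* flag bit stored in the container header */

typedef struct JsonbContainerStub
{
	uint32_t	header;
} JsonbContainerStub;

/* hazardous form: yields the raw masked bits, not 0/1 */
#define JsonContainerIsObjectUnsafe(jc) ((jc)->header & JB_FOBJECT)

/* safe form: normalizes to 0 or 1 before any narrowing assignment */
#define JsonContainerIsObjectSafe(jc) (((jc)->header & JB_FOBJECT) != 0)

int
object_flag_survives_unsafe(void)
{
	JsonbContainerStub jc = {JB_FOBJECT};

	/* 0x20000000 narrowed to char keeps only the low byte: 0 */
	pg_bool		b = JsonContainerIsObjectUnsafe(&jc);

	return b != 0;				/* flag is lost */
}

int
object_flag_survives_safe(void)
{
	JsonbContainerStub jc = {JB_FOBJECT};
	pg_bool		b = JsonContainerIsObjectSafe(&jc);

	return b != 0;				/* flag survives as 1 */
}
```

With the unsafe macro the assignment silently drops the flag; the `!= 0` form
avoids the narrowing problem entirely.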

7. More generally, the patch is hard to review because it whacks the
existing code around so thoroughly that it's tough even to match up
old and new code to get a handle on what you changed functionally.
Maybe it would be good if you could separate it into a refactoring
patch that just moves existing code around without functional changes,
and then a patch on top of that that adds code and makes only localized
changes in what's already there.

regards, tom lane


#8Nikita Glukhov
n.gluhov@postgrespro.ru
In reply to: Tom Lane (#7)
Re: PATCH: recursive json_populate_record()

On 22.01.2017 21:58, Tom Lane wrote:

In the meantime, we are *not* going to have attndims be semantically
significant in just this one function. Therefore, please remove this patch's
dependence on attndims.

Ok, I've removed patch's dependence on attndims. But I still believe that
someday PostgreSQL's type system will be fixed.

I looked through the patch briefly and have some other comments:

Thank you very much for your review.

1. It's pretty bizarre that you need dummy forward struct declarations
for ColumnIOData and RecordIOData. The reason evidently is that somebody
ignored project style and put a bunch of typedefs after the local function
declarations in jsonfuncs.c. Project style of course is to put the
typedefs first, precisely because they may be needed in the local function
declarations. I will go move those declarations right now, as a single-
purpose patch, so that you have something a bit cleaner to modify.

These forward struct declarations were moved to the type declaration section.

2. The business with isstring, saved_scalar_is_string, etc makes me itch.
I'm having a hard time explaining exactly why, but it just seems like a
single-purpose kluge. Maybe it would seem less so if you saved the
original JsonTokenType instead.

The original JsonTokenType is saved now. Also, the "isnull" fields of several
structures were replaced by it.

But also I'm wondering why the ultimate consumers are concerned with
string-ness as such.

saved_scalar_is_string was necessary for the case when a json string is converted
to the json/jsonb type. The json lexer returns strings with their quotes stripped, and
we must restore them before passing the value to json_in() or jsonb_in(). There were
examples of this case in my first message:

[master]=# select * from json_to_record('{"js": "a"}') as rec(js json);
ERROR: invalid input syntax for type json
DETAIL: Token "a" is invalid.
CONTEXT: JSON data, line 1: a

[master]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
------
true

[patched]=# select * from json_to_record('{"js": "a"}') as rec(js json);
js
-----
"a"

[patched]=# select * from json_to_record('{"js": "true"}') as rec(js json);
js
--------
"true"
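
The re-quoting described above amounts to putting the stripped quotes (and any
necessary escapes) back before the value reaches json_in()/jsonb_in(). A rough
standalone sketch of that step (the function name and the minimal escape set
are my assumptions, not the patch's code; a complete version would also escape
control characters):

```c
#include <stdlib.h>
#include <string.h>

/*
 * Wrap a lexed string value back in double quotes, escaping the
 * characters that would otherwise break the JSON syntax again.
 * The caller must free the result.
 */
char *
requote_json_string(const char *raw)
{
	size_t		len = strlen(raw);

	/* worst case: every char escaped, plus two quotes and terminator */
	char	   *out = malloc(2 * len + 3);
	char	   *p = out;

	*p++ = '"';
	for (const char *s = raw; *s; s++)
	{
		if (*s == '"' || *s == '\\')
			*p++ = '\\';		/* re-escape quote and backslash */
		*p++ = *s;
	}
	*p++ = '"';
	*p = '\0';
	return out;
}
```

With this step, the lexed value `a` round-trips as the valid JSON text `"a"`,
which is exactly the difference between the [master] and [patched] outputs above.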

It seems like mainly what they're trying to do is forbid converting a string
into a non-scalar Postgres type, but (a) why, and (b) if you are going to
restrict it, wouldn't it be more sensible to frame it as you can't convert any
JSON scalar to a non-scalar type? As to (a), it seems like you're just
introducing unnecessary compatibility breakage if you insist on that.
As to (b), really it seems like an appropriate restriction of that sort
would be like "if target type is composite, source must be a JSON object,
and if target type is array, source must be a JSON array". Personally
I think what you ought to do here is not change the semantics of cases
that are allowed today, but only change the semantics in the cases of
JSON object being assigned to PG composite and JSON array being assigned
to PG array; both of those fail today, so there's nobody depending on the
existing behavior. But if you reject string-to-composite cases then you
will be breaking existing apps, and I doubt people will thank you for it.

I've removed compatibility-breaking restrictions and now string-to-non-scalar
conversions through the input function are allowed. Also I've added
corresponding regression test cases.

3. I think you'd be better off declaring ColumnIOData.type_category as an
actual enum type, not "char" with some specific values documented only
by a comment. Also, could you fold the domain case in as a fourth
type_category?

I've introduced the enum type TypeCat for type categories, with the domain category as
its fourth member. (The names TypeCategory and JsonTypeCategory conflict with
existing names; you might offer a better one.)

Also, why does ColumnIOData have both an ndims field and another one buried
in io.array.ndims?

Now there are no ndims fields at all, but earlier their values could differ:
ColumnIOData.ndims was typically copied from attndims, but ArrayIOData.ndims
could be copied from typndims for domain types.

4. populate_array_report_expected_array() violates our error message
guidelines, first by using elog rather than ereport for a user-facing
error, and second by assembling the message from pieces, which would
make it untranslatable even if it were being reported through ereport.
I'm not sure if this needs to be fixed though; will the function even
still be needed once you remove the attndims dependency? Even if there
are still callers, you wouldn't necessarily need such a function if
you scale back your ambitions a bit as to the amount of detail required.
I'm not sure you really need to report what you think the dimensions
are when complaining that an array is nonrectangular.

It was my mistake that I was not familiar with the error message guidelines. Now
ereport() is used and there is no message assembling, but I'm also not sure
that we need to report these details.

5. The business with having some arguments that are only for json and
others that are only for jsonb, eg in populate_scalar(), also makes me
itch.

I've refactored json/jsonb passing using new struct JsValue which contains
union for json/jsonb values.

I wonder whether this wouldn't all get a lot simpler and more
readable if we gave up on trying to share code between the two cases.
In populate_scalar(), for instance, the amount of code actually shared
between the two cases amounts to a whole two lines, which are dwarfed
by the amount of crud needed to deal with trying to serve both cases
in the one function. There are places where there's more sharable
code than that, but it seems like a better design might be to refactor
the sharable code out into subroutines called by separate functions for
json and jsonb.

Maybe two separate families of functions like this

a_json(common_args, json_args)
{
    b(common_args);
    c_json(json_args);
    d(common_args);
}

a_jsonb(common_args, jsonb_args)
{
    b(common_args);
    c_jsonb(jsonb_args);
    d(common_args);
}

could slightly improve readability, but such code duplication (I believe it is
duplicate code) would never be acceptable to me.

I can only offer to extract two subroutines from populate_scalar():
populate_scalar_json() and populate_scalar_jsonb(). I think InputFunctionCall()
here should have exactly one call site, because semantically there is only
one such call per scalar, regardless of its type.

6. I'm not too excited by your having invented JsonContainerSize,
JsonContainerIsObject, etc, and then using them in just this new
code. That is not really improving readability, it's just creating
stylistic inconsistency between these functions and the rest of the
jsonb code.

These new macros were introduced because the existing JB_XXX() macros work with
the Jsonb struct and there were no analogous macros for the JsonbContainer struct.
They were not invented specifically for this patch: they are the result of the
deep refactoring of the json/jsonb code which was made in the process of working on
jsonb compression (I'm going to present this work here soon). This refactoring
allows us to use a single generic interface to the json, jsonb and jsonbc (new
compressed format) types using the ExpandedObject representation, but direct access
to JsonbContainer fields becomes illegal. So I'm trying not to create new
direct references to JsonbContainer fields. I could also offer to rename these
macros to JBC_XXX() or JB_CONTAINER_XXX(), but that would lead to unnecessary
conflicts.

If you want such macros I think it would be better to submit a separate
cosmetic patch that tries to hide such bit-tests behind macros throughout
the jsonb code.

I've attached that patch, but not all the bit-tests were hidden: some of them
in jsonb_util.c will still remain valid after the upcoming refactoring because they
don't belong to the generic code (it might be better to use JBC_XXX() macros there).

Another problem is that these macros are coding hazards because they look like
they yield bool values but they don't really; assigning the results to bool
variables, for example, would fail on most platforms. Safer coding for a
general-purpose macro would be like

#define JsonContainerIsObject(jc) (((jc)->header & JB_FOBJECT) != 0)

Sorry for this obvious mistake. But macros JB_ROOT_IS_XXX() also contain the
same hazard.

7. More generally, the patch is hard to review because it whacks the
existing code around so thoroughly that it's tough even to match up
old and new code to get a handle on what you changed functionally.
Maybe it would be good if you could separate it into a refactoring
patch that just moves existing code around without functional changes,
and then a patch on top of that that adds code and makes only localized
changes in what's already there.

I've split this patch into two patches as you asked.

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

0001-introduce-JsonContainerXxx-macros-v04.patch (text/x-patch, +11 -5)
0002-refactor-json_populate_record-v04.patch (text/x-patch, +467 -455)
0003-recursive-json_populate_record-v04.patch (text/x-patch, +1808 -40)
#9Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nikita Glukhov (#8)
Re: PATCH: recursive json_populate_record()

Nikita Glukhov <n.gluhov@postgrespro.ru> writes:

On 22.01.2017 21:58, Tom Lane wrote:

If you want such macros I think it would be better to submit a separate
cosmetic patch that tries to hide such bit-tests behind macros throughout
the jsonb code.

I've attached that patch, but not all the bit-tests were hidden: some of them
in jsonb_util.c still remain valid after upcoming refactoring because they
don't belong to generic code (there might be better to use JBC_XXX() macros).

Pushed this; grepping found a couple other places that could be replaced
by the macros, so I did.

I didn't include the JsonContainerIsEmpty macro, though. It wasn't used
anywhere, and I'm not exactly convinced that "IsEmpty" is more readable
than "Size == 0", anyhow. We can add it later if the use-case gets
stronger.

Sorry for this obvious mistake. But macros JB_ROOT_IS_XXX() also contain the
same hazard.

Good point, fixed.

I'll look at the rest of this in a bit.

regards, tom lane


#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nikita Glukhov (#8)
Re: PATCH: recursive json_populate_record()

Nikita Glukhov <n.gluhov@postgrespro.ru> writes:

On 22.01.2017 21:58, Tom Lane wrote:

5. The business with having some arguments that are only for json and
others that are only for jsonb, eg in populate_scalar(), also makes me
itch.

I've refactored json/jsonb passing using new struct JsValue which contains
union for json/jsonb values.

I'm not too enamored of that fix. It doesn't do much for readability, and
at least with my compiler (gcc 4.4.7), I sometimes get "variable might be
used uninitialized" warnings, probably due to not all fields of the union
getting set in every code path.

I wonder whether this wouldn't all get a lot simpler and more
readable if we gave up on trying to share code between the two cases.

Maybe two separate families of functions like this
...
could slightly improve readability, but such code duplication (I believe it is
a duplicate code) would never be acceptable to me.

I think you need to take a second look at the code you're producing
and realize that it's not so clean either. This extract from
populate_record_field, for example, is pretty horrid:

    /* prepare column metadata cache for the given type */
    if (col->typid != typid || col->typmod != typmod)
        prepare_column_cache(col, typid, typmod, mcxt, jsv->is_json);

    *isnull = jsv->is_json ? jsv->val.json.str == NULL
                           : jsv->val.jsonb == NULL ||
                             jsv->val.jsonb->type == jbvNull;

    typcat = col->typcat;

    /* try to convert json string to a non-scalar type through input function */
    if ((jsv->is_json ? jsv->val.json.type == JSON_TOKEN_STRING
                      : jsv->val.jsonb &&
                        jsv->val.jsonb->type == jbvString) &&
        (typcat == TYPECAT_ARRAY ||
         typcat == TYPECAT_COMPOSITE))
        typcat = TYPECAT_SCALAR;

    /* we must perform domain checks for NULLs */
    if (*isnull && typcat != TYPECAT_DOMAIN)
        return (Datum) 0;

When every other line contains an is_json conditional, you are not making
good readable code. It's also worth noting that this is going to become
even less readable after pgindent gets done with it:

    /* prepare column metadata cache for the given type */
    if (col->typid != typid || col->typmod != typmod)
        prepare_column_cache(col, typid, typmod, mcxt, jsv->is_json);

    *isnull = jsv->is_json ? jsv->val.json.str == NULL
        : jsv->val.jsonb == NULL ||
        jsv->val.jsonb->type == jbvNull;

    typcat = col->typcat;

    /* try to convert json string to a non-scalar type through input function */
    if ((jsv->is_json ? jsv->val.json.type == JSON_TOKEN_STRING
         : jsv->val.jsonb &&
         jsv->val.jsonb->type == jbvString) &&
        (typcat == TYPECAT_ARRAY ||
         typcat == TYPECAT_COMPOSITE))
        typcat = TYPECAT_SCALAR;

    /* we must perform domain checks for NULLs */
    if (*isnull && typcat != TYPECAT_DOMAIN)
        return (Datum) 0;

You could maybe improve that result with some different formatting
choices, but I think it's basically a readability fail to be relying
heavily on multiline x?y:z conditionals in the first place.

And I still maintain that it's entirely silly to be structuring
populate_scalar() this way.

So I really think it'd be a good idea to explore separating the json and
jsonb code paths. This direction isn't looking like a win.

7. More generally, the patch is hard to review because it whacks the
existing code around so thoroughly that it's tough even to match up
old and new code to get a handle on what you changed functionally.
Maybe it would be good if you could separate it into a refactoring
patch that just moves existing code around without functional changes,
and then a patch on top of that that adds code and makes only localized
changes in what's already there.

I've split this patch into two patches as you asked.

That didn't really help :-( ... the 0002 patch is still nigh unreadable.
Maybe it's trying to do too much in one step.

regards, tom lane


#11Nikita Glukhov
n.gluhov@postgrespro.ru
In reply to: Tom Lane (#10)
Re: PATCH: recursive json_populate_record()

On 25.01.2017 23:58, Tom Lane wrote:

I think you need to take a second look at the code you're producing
and realize that it's not so clean either. This extract from
populate_record_field, for example, is pretty horrid:

But what if we introduce some helper macros like this:

#define JsValueIsNull(jsv) \
	((jsv)->is_json ? !(jsv)->val.json.str \
		: !(jsv)->val.jsonb || (jsv)->val.jsonb->type == jbvNull)

#define JsValueIsString(jsv) \
	((jsv)->is_json ? (jsv)->val.json.type == JSON_TOKEN_STRING \
		: (jsv)->val.jsonb && (jsv)->val.jsonb->type == jbvString)

	/* prepare column metadata cache for the given type */
	if (col->typid != typid || col->typmod != typmod)
		prepare_column_cache(col, typid, typmod, mcxt, jsv->is_json);

	*isnull = JsValueIsNull(jsv);

	typcat = col->typcat;

	/* try to convert json string to a non-scalar type through input function */
	if (JsValueIsString(jsv) &&
		(typcat == TYPECAT_ARRAY || typcat == TYPECAT_COMPOSITE))
		typcat = TYPECAT_SCALAR;

	/* we must perform domain checks for NULLs */
	if (*isnull && typcat != TYPECAT_DOMAIN)
		return (Datum) 0;

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#12Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nikita Glukhov (#11)
Re: PATCH: recursive json_populate_record()

Nikita Glukhov <n.gluhov@postgrespro.ru> writes:

On 25.01.2017 23:58, Tom Lane wrote:

I think you need to take a second look at the code you're producing
and realize that it's not so clean either. This extract from
populate_record_field, for example, is pretty horrid:

But what if we introduce some helper macros like this:

#define JsValueIsNull(jsv) \
((jsv)->is_json ? !(jsv)->val.json.str \
: !(jsv)->val.jsonb || (jsv)->val.jsonb->type == jbvNull)

#define JsValueIsString(jsv) \
((jsv)->is_json ? (jsv)->val.json.type == JSON_TOKEN_STRING \
: (jsv)->val.jsonb && (jsv)->val.jsonb->type == jbvString)

Yeah, I was wondering about that too. I'm not sure that you can make
a reasonable set of helper macros that will fix this, but if you want
to try, go for it.

BTW, just as a stylistic thing, I find "a?b:c||d" unreadable: I have
to go back to the manual every darn time to convince myself whether
that means (a?b:c)||d or a?b:(c||d). It's better not to rely on
the reader (... or the author) having memorized C's precedence rules
in quite that much detail. Extra parens help.

regards, tom lane


#13Michael Paquier
michael@paquier.xyz
In reply to: Tom Lane (#12)
Re: PATCH: recursive json_populate_record()

On Thu, Jan 26, 2017 at 6:49 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:


Moved to CF 2017-03 as discussion is going on and more review is
needed on the last set of patches.
--
Michael


#14David Steele
david@pgmasters.net
In reply to: Michael Paquier (#13)
Re: PATCH: recursive json_populate_record()

On 2/1/17 12:53 AM, Michael Paquier wrote:


Moved to CF 2017-03 as discussion is going on and more review is
needed on the last set of patches.

The current patches do not apply cleanly at cccbdde:

$ git apply ../other/0001-introduce-JsonContainerXxx-macros-v04.patch
error: patch failed: src/backend/utils/adt/jsonb_util.c:328
error: src/backend/utils/adt/jsonb_util.c: patch does not apply
error: patch failed: src/backend/utils/adt/jsonfuncs.c:1266
error: src/backend/utils/adt/jsonfuncs.c: patch does not apply
error: patch failed: src/include/utils/jsonb.h:218
error: src/include/utils/jsonb.h: patch does not apply

In addition, it appears a number of suggestions have been made by Tom
that warrant new patches. Marked "Waiting on Author".

--
-David
david@pgmasters.net


#15David Steele
david@pgmasters.net
In reply to: David Steele (#14)
Re: PATCH: recursive json_populate_record()

On 3/16/17 11:54 AM, David Steele wrote:


The current patches do not apply cleanly at cccbdde:

$ git apply ../other/0001-introduce-JsonContainerXxx-macros-v04.patch
error: patch failed: src/backend/utils/adt/jsonb_util.c:328
error: src/backend/utils/adt/jsonb_util.c: patch does not apply
error: patch failed: src/backend/utils/adt/jsonfuncs.c:1266
error: src/backend/utils/adt/jsonfuncs.c: patch does not apply
error: patch failed: src/include/utils/jsonb.h:218
error: src/include/utils/jsonb.h: patch does not apply

In addition, it appears a number of suggestions have been made by Tom
that warrant new patches. Marked "Waiting on Author".

This thread has been idle for months since Tom's review.

The submission has been marked "Returned with Feedback". Please feel
free to resubmit to a future commitfest.

Thanks,
--
-David
david@pgmasters.net


#16Andrew Dunstan
andrew@dunslane.net
In reply to: David Steele (#15)
Re: PATCH: recursive json_populate_record()

On 03/21/2017 01:37 PM, David Steele wrote:


This thread has been idle for months since Tom's review.

The submission has been marked "Returned with Feedback". Please feel
free to resubmit to a future commitfest.

Please revive. I am planning to look at this later this week.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#17David Steele
david@pgmasters.net
In reply to: Andrew Dunstan (#16)
Re: PATCH: recursive json_populate_record()

On 3/21/17 2:31 PM, Andrew Dunstan wrote:

On 03/21/2017 01:37 PM, David Steele wrote:

This thread has been idle for months since Tom's review.

The submission has been marked "Returned with Feedback". Please feel
free to resubmit to a future commitfest.

Please revive. I am planning to look at this later this week.

Revived in "Waiting on Author" state.

--
-David
david@pgmasters.net


#18Nikita Glukhov
n.gluhov@postgrespro.ru
In reply to: David Steele (#17)
Re: PATCH: recursive json_populate_record()

On 22.03.2017 00:26, David Steele wrote:

On 3/21/17 2:31 PM, Andrew Dunstan wrote:

On 03/21/2017 01:37 PM, David Steele wrote:

This thread has been idle for months since Tom's review.

The submission has been marked "Returned with Feedback". Please feel
free to resubmit to a future commitfest.

Please revive. I am planning to look at this later this week.

Revived in "Waiting on Author" state.

Here is updated v05 version of this patch:
* rebased to the latest master
* one patch is completely removed because it is unnecessary now
* added some macros for JsValue fields access
* added new JsObject structure for passing json/jsonb objects
* refactoring patch is not yet simplified (not broken into a
step-by-step sequence)

Also, I should note that the json branch of this code is not as efficient
as it could be: nested json objects/arrays may go through repetitive
parsing passes instead of being handled in a single parsing pass.

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

0001-refactor-json_populate_record-v05.patch (text/x-patch, +516 -455)
0002-recursive-json_populate_record-v05.patch (text/x-patch, +1805 -40)
#19Andres Freund
andres@anarazel.de
In reply to: Andrew Dunstan (#16)
Re: PATCH: recursive json_populate_record()

On 2017-03-21 14:31:08 -0400, Andrew Dunstan wrote:


Please revive. I am planning to look at this later this week.

Since that was 10 days ago - you're still planning to get this in before
the freeze?

- Andres


#20Andrew Dunstan
andrew@dunslane.net
In reply to: Andres Freund (#19)
Re: PATCH: recursive json_populate_record()

On 04/03/2017 05:17 PM, Andres Freund wrote:


Since that was 10 days ago - you're still planning to get this in before
the freeze?

Hoping to, other demands have intervened a bit, but I might be able to.

cheers

andrew

--

Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#21Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#20)
#22Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#21)
#23Nikita Glukhov
n.gluhov@postgrespro.ru
In reply to: Andrew Dunstan (#22)
#24Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nikita Glukhov (#23)
#25Noah Misch
noah@leadboat.com
In reply to: Nikita Glukhov (#23)
#26Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#25)
#27Nikita Glukhov
n.gluhov@postgrespro.ru
In reply to: Tom Lane (#24)