Small issues with CREATE TABLE COMPRESSION

Started by Michael Paquier · almost 5 years ago · 28 messages · pgsql-hackers
#1 Michael Paquier
michael@paquier.xyz

Hi all,

I have been looking at and testing the patch set for CREATE TABLE
COMPRESSION, and spotted a couple of things in parallel with some work
done by Jacob (added in CC).

The behavior around CREATE TABLE AS and matviews is a bit confusing,
and not documented. First, at the grammar level, it is not possible
to specify which compression option is used per column when creating
the relation. So, the compression of all the toastable columns of the
relation would just be set to default_toast_compression. That's not
enforceable at column level when the relation is created, except with
a follow-up ALTER TABLE. That's similar to STORAGE when it comes to
matviews, but these are at least documented.

And so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is
not mentioned in its docs:
https://www.postgresql.org/docs/devel/sql-altermaterializedview.html
psql could have tab completion support for that.
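For reference, the command that works today but is missing from those docs looks like this (a minimal sketch; the matview and column names are made up):

```sql
-- Supported since the CREATE TABLE COMPRESSION work, but not yet
-- documented for materialized views; "mv" and "payload" are
-- hypothetical names.
ALTER MATERIALIZED VIEW mv
    ALTER COLUMN payload SET COMPRESSION lz4;
```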

There are no tests in pg_dump to make sure that some ALTER
MATERIALIZED VIEW or ALTER TABLE commands are generated when the
compression of a matview's or table's column is changed. This depends
on the value of default_toast_compression, but that would be nice to
have something, and get at least some coverage with
--no-toast-compression. You would need to make the tests conditional
here, for example with check_pg_config() (see for example what's done
with channel binding in ssl/t/002_scram.pl).

Another thing is the handling of the per-value compression that could
be confusing to the user. As no materialization of the data is done
for a CTAS or a matview, and the regression tests of compression.sql
track that AFAIK, there can be a mix of toast values compressed with
lz4 or pglz, with pg_attribute.attcompression being one or the other.

Now, we don't really document any of that, and the per-column
compression value would be set to default_toast_compression while the
stored values may use a mix of the compression methods, depending on
where the toasted values come from. If this behavior is intended, it
makes me wonder what the possibility to set the compression for a
materialized view column is useful for, except for a logical
dump/restore. As of HEAD we'd just insert the toasted value from the
origin as-is, so the compression of the column has no effect at all.
Another thing here is the inconsistency that this brings with pg_dump.
For example, as the dumped values are decompressed, we could have
values compressed with pglz at the origin, with a column using lz4
within its definition, making everything compressed with lz4
once the values are restored. This choice may be fine, but I think
that it would be good to document all that. That would be less
surprising to the user.
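A minimal sketch of the mix described above, assuming a build --with-lz4 (table and column names are made up):

```sql
-- Origin table compresses its toasted values with pglz.
CREATE TABLE src (a text COMPRESSION pglz);
INSERT INTO src VALUES (repeat('0123456789', 10000));

-- The CTAS target's attcompression follows default_toast_compression...
SET default_toast_compression = 'lz4';
CREATE TABLE dst AS SELECT a FROM src;

-- ...but the toasted value is inserted as-is, so the stored datum can
-- still report the origin's method:
SELECT pg_column_compression(a) FROM dst;  -- may show 'pglz', not 'lz4'
```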

Similarly, we may want to document that COMPRESSION does not trigger a
table rewrite, but that it is effective only for the new toast values
inserted if a tuple is rebuilt and rewritten?

Would it be better to document that pg_column_compression() returns
NULL if the column is not a toastable type or if the column's value is
not compressed?
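Something like the following illustrates both NULL cases (a sketch with made-up names):

```sql
CREATE TABLE t (a text, b int);
-- A short value is stored inline and uncompressed.
INSERT INTO t VALUES ('short', 1);
SELECT pg_column_compression(a) FROM t;  -- NULL: value not compressed
SELECT pg_column_compression(b) FROM t;  -- NULL: int is not toastable
```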

The flexibility of allow_system_table_mods allows one to change the
compression method of catalogs; for example, switching rolpassword
with a SCRAM verifier large enough to be toasted would lock access to
the cluster when restarting the server without lz4 support. I
shouldn't have done that but I did, and I like it :)

The design used by this feature is pretty cool: as long as you don't
read the compressed values, physical replication can work out of the
box even across nodes that are built with or without lz4.

Thanks,
--
Michael

#2 Justin Pryzby
pryzby@telsasoft.com
In reply to: Michael Paquier (#1)
Re: Small issues with CREATE TABLE COMPRESSION

On Tue, Apr 27, 2021 at 03:22:25PM +0900, Michael Paquier wrote:

Hi all,

I have been looking at and testing the patch set for CREATE TABLE
COMPRESSION, and spotted a couple of things in parallel with some work
done by Jacob (added in CC).

The behavior around CREATE TABLE AS and matviews is a bit confusing,
and not documented. First, at the grammar level, it is not possible
to specify which compression option is used per column when creating
the relation. So, the compression of all the toastable columns of the
relation would just be set to default_toast_compression. That's not
enforceable at column level when the relation is created, except with
a follow-up ALTER TABLE. That's similar to STORAGE when it comes to
matviews, but these are at least documented.

And so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is
not mentioned in its docs:
https://www.postgresql.org/docs/devel/sql-altermaterializedview.html

psql could have tab completion support for that.

Actually ALTER matview ALTER col has no tab completion at all, right?

Now, we don't really document any of that, and the per-column
compression value would be set to default_toast_compression while the
stored values may use a mix of the compression methods, depending on
where the toasted values come from. If this behavior is intended, it
makes me wonder what the possibility to set the compression for a
materialized view column is useful for, except for a logical
dump/restore. As of HEAD we'd just insert the toasted value from the
origin as-is, so the compression of the column has no effect at all.

That may be true if the mat view is trivial, but not true if it has
expressions. The mat view column may be built on multiple table columns, or be
of a different type than the columns it's built on top of, so the relationship
may not be so direct.

Another thing here is the inconsistency that this brings with pg_dump.
For example, as the dumped values are decompressed, we could have
values compressed with pglz at the origin, with a column using lz4
within its definition, making everything compressed with lz4
once the values are restored. This choice may be fine, but I think
that it would be good to document all that. That would be less
surprising to the user.

Can you suggest what we'd say and where? I think the point is just
that pg_dump shouldn't lose the user's compression setting.

The setting itself is for "future" data, and the only way to guarantee
which compression types are in use is a VACUUM FULL/CLUSTER or a
pg_dump restore.
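In other words, something like this sketch (the table name is hypothetical):

```sql
-- Old values compressed with pglz stay as they are...
ALTER TABLE t ALTER COLUMN a SET COMPRESSION lz4;
-- ...until the table is rewritten, e.g.:
VACUUM FULL t;  -- or CLUSTER, or a pg_dump/pg_restore cycle
```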

Similarly, we may want to document that COMPRESSION does not trigger a
table rewrite, but that it is effective only for the new toast values
inserted if a tuple is rebuilt and rewritten?

Good point. I started with this.

diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 39927be41e..8cceea41d0 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -391,7 +391,21 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </term>
     <listitem>
      <para>
-      This sets the compression method for a column.  The supported compression
+      This sets the compression method to be used for data inserted into a column.
+
+      This does not cause the table to be rewritten, so existing data may still
+      be compressed with other compression methods.  If the table is rewritten with
+      <command>VACUUM FULL</command> or <command>CLUSTER</command>, or restored
+      with <application>pg_restore</application>, then all tuples are rewritten
+      with the configured compression methods.
+
+      Also, note that when data is inserted from another relation (for example,
+      by <command>INSERT ... SELECT</command>), tuples from the source data are
+      not necessarily detoasted, and any previously compressed data is retained
+      with its existing compression method, rather than recompressing with the
+      compression methods of the target columns.
+
+      The supported compression
       methods are <literal>pglz</literal> and <literal>lz4</literal>.
       <literal>lz4</literal> is available only if <literal>--with-lz4</literal>
       was used when building <productname>PostgreSQL</productname>.
#3 Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#1)
Re: Small issues with CREATE TABLE COMPRESSION

Hi,

My analysis of this open item is that there are no code-level issues
here, but there is one line of documentation that clearly got
forgotten, some other documentation changes that might be nice, and
maybe someone wants to work more on testing and/or tab completion at
some point.

On Tue, Apr 27, 2021 at 2:22 AM Michael Paquier <michael@paquier.xyz> wrote:

The behavior around CREATE TABLE AS and matviews is a bit confusing,
and not documented.

It's no different from a bunch of other column properties that you
also can't set when creating a materialized view. I don't think this
patch created this problem, or that it is responsible for solving it.

And so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is
not mentioned in its docs:

I agree that's an oversight and should be fixed.

https://www.postgresql.org/docs/devel/sql-altermaterializedview.html
psql could have tab completion support for that.

I don't believe it's our policy that incomplete tab completion support
rises to the level of an open item, especially given that, as Justin
points out, ALTER MATERIALIZED VIEW name ALTER COLUMN name <tab>
doesn't complete anything *at all*.

There are no tests in pg_dump to make sure that some ALTER
MATERIALIZED VIEW or ALTER TABLE commands are generated when the
compression of a matview's or table's column is changed.

True, but it does seem to work. I am happy if you or anyone want to
write some tests.

Another thing is the handling of the per-value compression that could
be confusing to the user. As no materialization of the data is done
for a CTAS or a matview, and the regression tests of compression.sql
track that AFAIK, there can be a mix of toast values compressed with
lz4 or pglz, with pg_attribute.attcompression being one or the other.

Yes. This is mentioned in the commit message, and was discussed
extensively on the original thread. We probably should have included
it in the documentation, as well. Justin's text seems fairly
reasonable to me.

Similarly, we may want to document that COMPRESSION does not trigger a
table rewrite, but that it is effective only for the new toast values
inserted if a tuple is rebuilt and rewritten?

Sure. I think Justin's text covers this, too.

Would it be better to document that pg_column_compression() returns
NULL if the column is not a toastable type or if the column's value is
not compressed?

We can.

Here's a proposed patch for the documentation issues not covered by
Justin's proposal.

--
Robert Haas
EDB: http://www.enterprisedb.com

Attachments:

compression-doc-fixes-v1.patch (application/octet-stream, +3 -1)
#4 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Justin Pryzby (#2)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, Apr 29, 2021 at 9:31 AM Justin Pryzby <pryzby@telsasoft.com> wrote:

On Tue, Apr 27, 2021 at 03:22:25PM +0900, Michael Paquier wrote:

Hi all,

And so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is
not mentioned in its docs:
https://www.postgresql.org/docs/devel/sql-altermaterializedview.html

psql could have tab completion support for that.

Actually ALTER matview ALTER col has no tab completion at all, right ?

Right.

Good point. I started with this.

diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 39927be41e..8cceea41d0 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -391,7 +391,21 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </term>
     <listitem>
      <para>
-      This sets the compression method for a column.  The supported compression
+      This sets the compression method to be used for data inserted into a column.
+
+      This does not cause the table to be rewritten, so existing data may still
+      be compressed with other compression methods.  If the table is rewritten with
+      <command>VACUUM FULL</command> or <command>CLUSTER</command>, or restored
+      with <application>pg_restore</application>, then all tuples are rewritten
+      with the configured compression methods.
+
+      Also, note that when data is inserted from another relation (for example,
+      by <command>INSERT ... SELECT</command>), tuples from the source data are
+      not necessarily detoasted, and any previously compressed data is retained
+      with its existing compression method, rather than recompressing with the
+      compression methods of the target columns.
+
+      The supported compression
       methods are <literal>pglz</literal> and <literal>lz4</literal>.
       <literal>lz4</literal> is available only if <literal>--with-lz4</literal>
       was used when building <productname>PostgreSQL</productname>.

Your documentation looks fine to me.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#5 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Robert Haas (#3)
Re: Small issues with CREATE TABLE COMPRESSION

On Wed, May 5, 2021 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:

There are no tests in pg_dump to make sure that some ALTER
MATERIALIZED VIEW or ALTER TABLE commands are generated when the
compression of a matview's or table's column is changed.

True, but it does seem to work. I am happy if you or anyone want to
write some tests.

I think it will be really hard to write such a test in pg_dump,
because by default we compile --without-lz4, which means we have
only one compression option available, and if there is only one option
available then the column compression method and the default
compression method will be the same, so the dump will not generate an
extra command of the type ALTER TABLE ... SET COMPRESSION.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#6 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#5)
Re: Small issues with CREATE TABLE COMPRESSION

On Wed, May 5, 2021 at 11:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Wed, May 5, 2021 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:

There are no tests in pg_dump to make sure that some ALTER
MATERIALIZED VIEW or ALTER TABLE commands are generated when the
compression of a matview's or table's column is changed.

True, but it does seem to work. I am happy if you or anyone want to
write some tests.

I think it will be really hard to write such a test in pg_dump,
because by default we compile --without-lz4, which means we have
only one compression option available, and if there is only one option
available then the column compression method and the default
compression method will be the same, so the dump will not generate an
extra command of the type ALTER TABLE ... SET COMPRESSION.

I think we already have such test cases, at least through pg_upgrade.
Basically, if you look at compression.sql, we are not dropping the
tables, so that pg_upgrade can dump and test them. So if the tests are
run --with-lz4, we can see ALTER TABLE ... SET COMPRESSION commands in
the pg_upgrade dump.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#7 Michael Paquier
michael@paquier.xyz
In reply to: Dilip Kumar (#6)
Re: Small issues with CREATE TABLE COMPRESSION

On Wed, May 05, 2021 at 01:41:03PM +0530, Dilip Kumar wrote:

I think we already have such test cases, at least through pg_upgrade.
Basically, if you look at compression.sql, we are not dropping the
tables, so that pg_upgrade can dump and test them. So if the tests are
run --with-lz4, we can see ALTER TABLE ... SET COMPRESSION commands in
the pg_upgrade dump.

The TAP tests of pg_dump are much more picky than what pg_upgrade is
able to do. With the existing set of tests in place, what you are
able to detect is that pg_upgrade does not *break* if there are tables
with attributes using various compression types, but that would not be
enough to make sure that the correct compression method is *applied*
depending on the context expected (default_toast_compression + the
attribute compression + pg_dump options), which is what the TAP tests
of pg_dump are able to correctly detect if extended in an appropriate
way.

With what's on HEAD, we would easily miss any bugs introduced in
pg_dump that change the set of commands generated depending on the
options given by a user, but still allow pg_upgrade to work correctly.
For example, there could be issues where we finish by setting up the
incorrect compression option, with pg_upgrade happily finishing.
There is a gap in the test coverage here.
--
Michael

#8 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#7)
Re: Small issues with CREATE TABLE COMPRESSION

On Wed, May 5, 2021 at 4:00 PM Michael Paquier <michael@paquier.xyz> wrote:

On Wed, May 05, 2021 at 01:41:03PM +0530, Dilip Kumar wrote:

I think we already have such test cases, at least through pg_upgrade.
Basically, if you look at compression.sql, we are not dropping the
tables, so that pg_upgrade can dump and test them. So if the tests are
run --with-lz4, we can see ALTER TABLE ... SET COMPRESSION commands in
the pg_upgrade dump.

The TAP tests of pg_dump are much more picky than what pg_upgrade is
able to do. With the existing set of tests in place, what you are
able to detect is that pg_upgrade does not *break* if there are tables
with attributes using various compression types, but that would not be
enough to make sure that the correct compression method is *applied*
depending on the context expected (default_toast_compression + the
attribute compression + pg_dump options), which is what the TAP tests
of pg_dump are able to correctly detect if extended in an appropriate
way.

Okay, got your point.

With what's on HEAD, we would easily miss any bugs introduced in
pg_dump that change the set of commands generated depending on the
options given by a user, but still allow pg_upgrade to work correctly.
For example, there could be issues where we finish by setting up the
incorrect compression option, with pg_upgrade happily finishing.
There is a gap in the test coverage here.

Basically, the problem is that the default compilation is
--without-lz4, so by default there is only one compression method, and
with only one compression method we cannot generate the test case you
asked for, because that will be the default compression method and we
don't dump the default compression method.
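To make the constraint concrete, with default_toast_compression = 'pglz' a dump only carries the non-default setting, roughly like this (a sketch with a made-up table name):

```sql
CREATE TABLE t (a text COMPRESSION lz4, b text);
-- pg_dump would emit something like:
--   CREATE TABLE t (a text, b text);
--   ALTER TABLE ONLY t ALTER COLUMN a SET COMPRESSION lz4;
-- and nothing at all for b, which uses the default method.
```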

So basically, if we have to write this test case in pg_dump then we
will have to use lz4, which means it will generate different output
--with-lz4 vs --without-lz4. With a simple regression test it is easy
to deal with such cases by keeping multiple .out files, but I am not
sure we can do this easily in a pg_dump test without adding much
complexity.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#9 Robert Haas
robertmhaas@gmail.com
In reply to: Dilip Kumar (#8)
Re: Small issues with CREATE TABLE COMPRESSION

On Wed, May 5, 2021 at 7:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

So basically, if we have to write this test case in pg_dump then we
will have to use lz4 which means it will generate different output
--with-lz4 vs --without-lz4. With a simple regress test it easy to
deal with such cases by keeping multiple .out files but I am not sure
can we do this easily with pg_dump test without adding much
complexity?

TAP tests have a facility for conditionally skipping tests; see
perldoc Test::More. That's actually superior to what you can do with
pg_regress. We'd need to come up with some logic to determine when to
skip or not, though. Perhaps the easiest method would be to have the
relevant Perl script try to create a table with an lz4 column. If that
works, then perform the LZ4-based tests. If it fails, check the error
message. If it says that LZ4 is not supported by this build, skip
those tests. If it says anything else, die.
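A sketch of the probe described here; the exact error wording below is an assumption:

```sql
-- Run by the Perl test script to decide whether to skip the LZ4 tests:
CREATE TABLE lz4_probe (a text COMPRESSION lz4);
-- On a build without LZ4 support this fails with an error along the
-- lines of:
--   ERROR: compression method lz4 not supported
```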

--
Robert Haas
EDB: http://www.enterprisedb.com

#10 Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#9)
Re: Small issues with CREATE TABLE COMPRESSION

On Wed, May 05, 2021 at 09:59:41AM -0400, Robert Haas wrote:

TAP tests have a facility for conditionally skipping tests; see
perldoc Test::More. That's actually superior to what you can do with
pg_regress. We'd need to come up with some logic to determine when to
skip or not, though. Perhaps the easiest method would be to have the
relevant Perl script try to create a table with an lz4 column. If that
works, then perform the LZ4-based tests. If it fails, check the error
message. If it says that LZ4 is not supported by this build, skip
those tests. If it says anything else, die.

There is a simpler and cheaper method to make the execution of TAP
tests conditional. As in src/test/ssl/t/002_scram.pl for channel
binding, I think that you could use something like
check_pg_config("#define HAVE_LIBLZ4 1") and use its result to decide
which tests to skip or not.
--
Michael

#11 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#10)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, May 6, 2021 at 5:35 AM Michael Paquier <michael@paquier.xyz> wrote:

On Wed, May 05, 2021 at 09:59:41AM -0400, Robert Haas wrote:

TAP tests have a facility for conditionally skipping tests; see
perldoc Test::More. That's actually superior to what you can do with
pg_regress. We'd need to come up with some logic to determine when to
skip or not, though. Perhaps the easiest method would be to have the
relevant Perl script try to create a table with an lz4 column. If that
works, then perform the LZ4-based tests. If it fails, check the error
message. If it says that LZ4 is not supported by this build, skip
those tests. If it says anything else, die.

There is a simpler and cheaper method to make the execution of TAP
tests conditional. As in src/test/ssl/t/002_scram.pl for channel
binding, I think that you could use something like
check_pg_config("#define HAVE_LIBLZ4 1") and use its result to decide
which tests to skip or not.

Thanks, Robert and Michael for your input. I will try to understand
how it is done in the example shared by you and come up with the test
once I get time. I assume this is not something urgent.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#12 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#11)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, May 6, 2021 at 10:45 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

I noticed that the error code for an invalid compression method is
not perfect: when we pass an invalid compression method during CREATE
TABLE or ALTER TABLE, we raise ERRCODE_FEATURE_NOT_SUPPORTED. I think
the correct error code is ERRCODE_INVALID_PARAMETER_VALUE. I have
attached a patch to fix this.

I thought of starting a new thread first but then I thought the
subject of this thread is quite generic and this is a fairly small fix
so we can use the same thread.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachments:

0001-Fix-thinko-in-invalid-compression-error-code.patch (text/x-patch, +1 -2)
#13 Michael Paquier
michael@paquier.xyz
In reply to: Dilip Kumar (#12)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, May 06, 2021 at 05:01:23PM +0530, Dilip Kumar wrote:

I noticed that the error code for an invalid compression method is
not perfect: when we pass an invalid compression method during CREATE
TABLE or ALTER TABLE, we raise ERRCODE_FEATURE_NOT_SUPPORTED. I think
the correct error code is ERRCODE_INVALID_PARAMETER_VALUE. I have
attached a patch to fix this.

Yeah, I agree that this is an improvement, so let's fix this.
--
Michael

#14 Michael Paquier
michael@paquier.xyz
In reply to: Dilip Kumar (#11)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, May 06, 2021 at 10:45:53AM +0530, Dilip Kumar wrote:

Thanks, Robert and Michael for your input. I will try to understand
how it is done in the example shared by you and come up with the test
once I get time. I assume this is not something urgent.

Thanks. FWIW, I'd rather see this gap closed asap, as features should
come with proper tests IMO.

While on it, I can see that there is no support for --with-lz4 in the
MSVC scripts. I think that this is something where we had better
close the gap, and upstream provides binaries on Windows on their
release page:
https://github.com/lz4/lz4/releases

And I am familiar with both areas, so I have no issue helping out and
getting that in shape correctly before beta1.
--
Michael

#15 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#14)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, May 6, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:

On Thu, May 06, 2021 at 10:45:53AM +0530, Dilip Kumar wrote:

Thanks, Robert and Michael for your input. I will try to understand
how it is done in the example shared by you and come up with the test
once I get time. I assume this is not something urgent.

Thanks. FWIW, I'd rather see this gap closed asap, as features should
come with proper tests IMO.

I have done this; please find the attached patch.

While on it, I can see that there is no support for --with-lz4 in the
MSVC scripts. I think that this is something where we had better
close the gap, and upstream provides binaries on Windows on their
release page:
https://github.com/lz4/lz4/releases

And I am familiar with both areas, so I have no issue helping out and
getting that in shape correctly before beta1.

I don't have much experience with the MSVC scripts, but I will look at
how some other parameters are handled and fix this.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachments:

v1-0001-Add-pg_dump-test-case-for-TOAST-compression.patch (text/x-patch, +39 -1)
#16 Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#13)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, May 06, 2021 at 09:04:57PM +0900, Michael Paquier wrote:

Yeah, I agree that this is an improvement, so let's fix this.

Just noticed that this was not applied yet, so done while I was
looking at this thread again.
--
Michael

#17 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#16)
Re: Small issues with CREATE TABLE COMPRESSION

On Sat, May 8, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:

On Thu, May 06, 2021 at 09:04:57PM +0900, Michael Paquier wrote:

Yeah, I agree that this is an improvement, so let's fix this.

Just noticed that this was not applied yet, so done while I was
looking at this thread again.

Thanks!

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#18 Michael Paquier
michael@paquier.xyz
In reply to: Dilip Kumar (#15)
Re: Small issues with CREATE TABLE COMPRESSION

On Thu, May 06, 2021 at 09:33:53PM +0530, Dilip Kumar wrote:

On Thu, May 6, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:

On Thu, May 06, 2021 at 10:45:53AM +0530, Dilip Kumar wrote:

Thanks, Robert and Michael for your input. I will try to understand
how it is done in the example shared by you and come up with the test
once I get time. I assume this is not something urgent.

Thanks. FWIW, I'd rather see this gap closed asap, as features should
come with proper tests IMO.

I have done this; please find the attached patch.

No objections to the approach of marking the lz4-related tests with a
special flag and skipping them. I have three comments:
- It would be good to document this new flag. See the comment block
on top of %dump_test_schema_runs.
- There should be a test for --no-toast-compression. You can add a
new command in %pgdump_runs, then unmatch the expected output with the
option.
- I would add one test case with COMPRESSION pglz somewhere to check
the case where ALTER TABLE ... SET COMPRESSION commands are not
generated, as this depends on default_toast_compression. A second
test I'd add is a materialized view with a column switched to use lz4
as compression method, with an extra ALTER command in create_sql.

I don't have much experience with the MSVC scripts, but I will look at
how some other parameters are handled and fix this.

Thanks! I can dive into that if that's an issue. Let's make things
compatible with what upstream provides, meaning that we should have
some documentation pointing to the location of their deliverables,
similarly to what we do for the Perl and OpenSSL dependencies, for
example.
--
Michael

#19 Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#18)
Re: Small issues with CREATE TABLE COMPRESSION

On Sat, May 08, 2021 at 05:37:58PM +0900, Michael Paquier wrote:

Thanks! I can dive into that if that's an issue. Let's make things
compatible with what upstream provides, meaning that we should have
some documentation pointing to the location of their deliverables,
similarly to what we do for the Perl and OpenSSL dependencies, for
example.

Dilip has sent me a patch set without adding pgsql-hackers in CC (I
guess it will be available soon). Anyway, this patch set included a
change to fix a hole in the installation docs, where --with-lz4 is not
listed yet. I have reviewed that stuff and found more
inconsistencies in the docs, leading me to the attached:
- The upstream project name is "LZ4", so we had better use the correct
name when not referring to the option value used in CREATE/ALTER
TABLE.
- doc/src/sgml/installation.sgml misses a description for --with-lz4.

Without the Windows changes, I am finishing with the attached to close
the loop with the docs.

Thanks,
--
Michael

Attachments:

fix-lz4-docs.patch (text/x-diff, +23 -4)
#20 Justin Pryzby
pryzby@telsasoft.com
In reply to: Michael Paquier (#19)
Re: Small issues with CREATE TABLE COMPRESSION

On Sat, May 08, 2021 at 10:13:09PM +0900, Michael Paquier wrote:

+      You need <productname>LZ4</productname>, if you want to support
+      the compression of data with this method; see
+      <xref linkend="sql-createtable"/>.

I suggest changing "the compression" to "compression".
I would write the whole thing like:
| The LZ4 library is needed to support compression of data using that method...

+         Build with <productname>LZ4</productname> compression support.
+         This allows the use of <productname>LZ4</productname> for the
+         compression of table data.   

remove "the"

--
Justin

#21 Justin Pryzby
pryzby@telsasoft.com
In reply to: Justin Pryzby (#20)

#22 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#19)

#23 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#18)

#24 Michael Paquier
michael@paquier.xyz
In reply to: Justin Pryzby (#21)

#25 Michael Paquier
michael@paquier.xyz
In reply to: Dilip Kumar (#23)

#26 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#25)

#27 Michael Paquier
michael@paquier.xyz
In reply to: Dilip Kumar (#26)

#28 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Michael Paquier (#27)