[HACKERS][Proposal] LZ4 Compressed Storage Manager

Started by Николай Петров · almost 7 years ago · 5 messages
#1Николай Петров
nik.petrov.ua@yandex.ru

Hello everyone!
Thank you for your interest in this topic.

I would like to propose a Compressed Storage Manager for PostgreSQL.

The problem:
When you store log-like or time-series data in your tables, you may face
high disk space consumption simply because of the volume of data. It is a
good idea to compress such tables, especially when the data is compressible
and the workload is OLAP-style WORM (write once, read many).

Current ways to solve this problem:
Today this can be addressed with a compressing file system such as BTRFS
or ZFS. That approach has a mixed impact on performance and brings
administrative complications.

Other databases' approaches:
Postgres Pro Enterprise ships an embedded CFS [1][2] for this purpose.
MySQL InnoDB offers two kinds of compression: table-level compression
(zlib only) [3] and transparent page compression (zlib, LZ4) [4]
via hole punching [5].

My proposal:
Implement an LZ4 Compressed Storage Manager. It should compress pages when
writing them to block files and decompress them when reading. I suggest LZ4
first because it has low CPU overhead and is available under the BSD
2-clause license.

Compressed Storage Manager operation (TL;DR: the algorithm could be similar
to MySQL table-level compression; a minimal sketch in C follows the diagram
below):
- It stores compressed pages in a block file, but because compressed pages
  vary in size, it also keeps an additional file holding the offset of each
  page.
- When it reads a page, it translates the file/offset request from the
  upper PostgreSQL layers into the actual page offset, reads the compressed
  page bytes, decompresses them, and fills the requested buffer with the
  decompressed page.
- Writing a new page is quite simple: compress the page, append it to the
  block file, and record the page offset in the pointers file.
- When a changed page has to be written, it checks whether the compressed
  page is smaller than or equal to the previous version. If it is bigger,
  the page is written to the end of the block file and the page pointer is
  updated; the old page version becomes dead.
- A mechanism to release free space is possible; for instance, MySQL uses
  hole punching (which has a mixed impact on performance [6]). Initially,
  dead pages could be freed via VACUUM FULL.

    pointers file
  +====+====+====+
  | p1 | p2 | p3 |  
  +=|==+==|=+==|=+  
    |     |    |_________________________________
    |     |____________________                  |
    |                          |                 |         block file
  +=|======+=================+=|===============+=|==================+
  | p1 len | p1 ####data#### | p2 len | p2 #d# | p3 len | p3 #data# |
  +========+=================+=================+====================+
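
To make these steps concrete, here is a minimal standalone C sketch of the
read and update paths for the layout in the diagram. It only illustrates the
algorithm described above and is not smgr code: the pointers-file record
format, the function names, and the raw pread()/pwrite() calls are my
assumptions.

    #include <stdint.h>
    #include <unistd.h>
    #include <lz4.h>

    #define BLCKSZ 8192                 /* PostgreSQL default page size */

    /*
     * Layout, as in the diagram:
     *   pointers file : one uint64 offset per block number
     *   block file    : [uint32 compressed length][compressed page bytes] ...
     */

    /* Read block 'blkno' into 'buffer' (BLCKSZ bytes); 0 on success. */
    static int
    compressed_read(int ptr_fd, int blk_fd, uint32_t blkno, char *buffer)
    {
        uint64_t    off;
        uint32_t    clen;
        char        cbuf[LZ4_COMPRESSBOUND(BLCKSZ)];

        if (pread(ptr_fd, &off, sizeof(off), (off_t) blkno * sizeof(off)) != sizeof(off))
            return -1;
        if (pread(blk_fd, &clen, sizeof(clen), (off_t) off) != sizeof(clen) ||
            clen > sizeof(cbuf))
            return -1;
        if (pread(blk_fd, cbuf, clen, (off_t) (off + sizeof(clen))) != (ssize_t) clen)
            return -1;
        return LZ4_decompress_safe(cbuf, buffer, (int) clen, BLCKSZ) == BLCKSZ ? 0 : -1;
    }

    /*
     * Write a changed block: reuse its slot if the new compressed image is
     * not larger than the old one, otherwise append at the end of the block
     * file and repoint; the old image becomes dead space.
     */
    static int
    compressed_write(int ptr_fd, int blk_fd, uint32_t blkno, const char *buffer)
    {
        uint64_t    off;
        uint32_t    oldlen;
        char        cbuf[LZ4_COMPRESSBOUND(BLCKSZ)];
        int         clen = LZ4_compress_default(buffer, cbuf, BLCKSZ, (int) sizeof(cbuf));
        uint32_t    newlen = (uint32_t) clen;

        if (clen <= 0 ||
            pread(ptr_fd, &off, sizeof(off), (off_t) blkno * sizeof(off)) != sizeof(off) ||
            pread(blk_fd, &oldlen, sizeof(oldlen), (off_t) off) != sizeof(oldlen))
            return -1;

        if (newlen > oldlen)            /* does not fit in place: relocate */
            off = (uint64_t) lseek(blk_fd, 0, SEEK_END);

        if (pwrite(blk_fd, &newlen, sizeof(newlen), (off_t) off) != sizeof(newlen) ||
            pwrite(blk_fd, cbuf, newlen, (off_t) (off + sizeof(newlen))) != (ssize_t) newlen ||
            pwrite(ptr_fd, &off, sizeof(off), (off_t) blkno * sizeof(off)) != sizeof(off))
            return -1;
        return 0;
    }

Appending a brand-new page is the same relocation path with the pointers
file extended by one entry; reclaiming the dead regions left behind by
relocations is exactly the part that needs VACUUM FULL or a background
defragmenter. On Linux a dead region could later be released with hole
punching, roughly fallocate(blk_fd, FALLOC_FL_PUNCH_HOLE |
FALLOC_FL_KEEP_SIZE, dead_off, dead_len) (needs <fcntl.h> and _GNU_SOURCE).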

Test of possible compression (database [7], table ticket_flights [8]):
547M  47087                             <- uncompressed
200M  47087.lz4.1.pages.compressed      <- per-page compression (37% of original size)
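
A per-page figure like this could be reproduced with a small tool that
compresses every 8 kB page of the relation file independently and sums the
results. The sketch below is my approximation of such a measurement, not the
exact tool used; the file name and the 4-byte length prefix per page are
assumptions.

    #include <stdio.h>
    #include <stdint.h>
    #include <lz4.h>

    #define BLCKSZ 8192

    int
    main(int argc, char **argv)
    {
        FILE       *f = fopen(argc > 1 ? argv[1] : "47087", "rb");
        char        page[BLCKSZ];
        char        out[LZ4_COMPRESSBOUND(BLCKSZ)];
        uint64_t    raw = 0, packed = 0;

        if (f == NULL)
            return 1;
        while (fread(page, 1, BLCKSZ, f) == BLCKSZ)
        {
            int         clen = LZ4_compress_default(page, out, BLCKSZ, (int) sizeof(out));

            if (clen <= 0)
                return 1;
            raw += BLCKSZ;
            packed += sizeof(uint32_t) + (uint64_t) clen;   /* length prefix + payload */
        }
        fclose(f);
        if (raw > 0)
            printf("%.1f%% of original size\n", 100.0 * packed / raw);
        return 0;
    }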

Pros:
- decreases disk space usage
- decreases disk reads
Cons:
- possibly increases random-access I/O
- increases CPU usage
- possible conflicts with PostgreSQL's expectations of Storage Manager
  behaviour
- could conflict with the pg_basebackup and pg_upgrade utilities
- compression requires additional memory

Why implement this at the Storage Manager level instead of using the
Pluggable storage API [9]?
- From my point of view, a Storage Manager level implementation lets you
  focus on the actual I/O operations and compression, and allows a much
  simpler implementation, because the Pluggable storage API forces you to
  implement more complex interfaces. To be honest, I am really hesitant
  about this point, especially because the Pluggable storage API allows
  creating an extension without core code modifications and potentially
  enables more effective compression algorithms (a Table Access Manager
  knows more about the data being stored).
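
For context, "Storage Manager level" here means providing an alternative to
the md.c callback table in src/backend/storage/smgr/smgr.c. An abridged
paraphrase of that table (PostgreSQL 11 era; the real struct has more
members and the types come from the PostgreSQL headers) looks roughly like
this:

    /* Abridged paraphrase of the f_smgr callback table in
     * src/backend/storage/smgr/smgr.c; create/unlink/prefetch/writeback/
     * sync callbacks are omitted here. */
    typedef struct f_smgr
    {
        void        (*smgr_read) (SMgrRelation reln, ForkNumber forknum,
                                  BlockNumber blocknum, char *buffer);
        void        (*smgr_write) (SMgrRelation reln, ForkNumber forknum,
                                   BlockNumber blocknum, char *buffer,
                                   bool skipFsync);
        void        (*smgr_extend) (SMgrRelation reln, ForkNumber forknum,
                                    BlockNumber blocknum, char *buffer,
                                    bool skipFsync);
        BlockNumber (*smgr_nblocks) (SMgrRelation reln, ForkNumber forknum);
        void        (*smgr_truncate) (SMgrRelation reln, ForkNumber forknum,
                                      BlockNumber nblocks);
        /* ... */
    } f_smgr;

A compressed storage manager would supply these block-oriented callbacks,
while the Pluggable storage (table access method) API sits one level higher
and deals with tuples rather than raw blocks.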

I would like to implement a proof of concept
and have a couple of questions:
- What is your opinion on the necessity of this feature
  (a Compressed Storage Manager)?
- Is it a good idea to implement database compression at the Storage
  Manager level? Perhaps it is better to use the Pluggable storage API.
- Is there any reason to reject this proposal?
- Are there any circumstances that would prevent implementing a
  Compressed Storage Manager?

Regards,
Nikolay P.

[1]: https://postgrespro.com/docs/enterprise/9.6/cfs
[2]: https://afiskon.github.io/static/2017/postgresql-in-core-compression-pgconf2017.pdf (page 17)
[3]: https://dev.mysql.com/doc/refman/8.0/en/innodb-table-compression.html
[4]: https://dev.mysql.com/doc/refman/8.0/en/innodb-page-compression.html
[5]: https://lwn.net/Articles/415889/
[6]: https://www.percona.com/blog/2017/11/20/innodb-page-compression/
[7]: https://postgrespro.com/education/demodb
[8]: https://postgrespro.com/docs/postgrespro/10/apjs02
[9]: https://commitfest.postgresql.org/22/1283/

#2Nikolay Petrov
nik.petrov.ua@yandex.ru
In reply to: Николай Петров (#1)
Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager

31.03.2019, 17:26, "Nikolay Petrov" <nik.petrov.ua@yandex.ru>:

Hello everyone!
Thank you for your interest in this topic.

I would like to propose a Compressed Storage Manager for PostgreSQL.

The previous thread is here:
/messages/by-id/op.ux8if71gcigqcu@soyouz
and the result of the previous investigation is here:
/messages/by-id/op.uyhszpgkcke6l8@soyouz

Regards,
Nikolay P.

#3Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Николай Петров (#1)
Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager


I can share my experience from developing CFS for Postgres Pro.
First of all, I want to note that it will most likely not be possible
to isolate all changes in Postgres at the Storage Manager level.
There are many places in Postgres (basebackup, vacuum, ...) which make
assumptions about the contents of the Postgres data directory.
So if a compressed storage manager provides an alternative file layout,
other parts of Postgres have to know about it.

The most difficult thing in CFS development is certainly defragmentation.
In CFS it is done by background garbage collection, using one or more GC
worker processes. The main challenges were to minimize its interference
with normal operation of the system, to make it fault-tolerant, and to
prevent unlimited growth of data segments.

CFS does not introduce its own storage manager; it is mostly embedded in
the existing Postgres file access layer (fd.c, md.c). That lets it reuse
the code responsible for mapping relations and the file descriptor cache.
As was recently discussed on -hackers, it may be a good idea to separate
the questions "how to map blocks to filenames and offsets" and "how to
actually perform I/O". With that separation it would be easier to
implement a compressed storage manager.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#4Bruce Momjian
bruce@momjian.us
In reply to: Николай Петров (#1)
Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager


Stepping back a bit, there are several levels of compression:

1. single field
2. across all fields in a row
3. across rows on a single page
4. across all rows in a table
5. across tables in a database

We currently do #1 with TOAST, and your approach would allow the first
three. #4 feels like it is getting near the features of columnar
storage. I think it is unclear whether adding #2 and #3 produces enough of
a benefit to warrant special storage, given the complexity and overhead of
implementing it.

I do think the Pluggable storage API is the right approach, and, if you
are going to go that route, adding #4 compression seems very worthwhile.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#5Pavel Stehule
pavel.stehule@gmail.com
In reply to: Bruce Momjian (#4)
Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager

On Thu, 11 Apr 2019 at 18:18, Bruce Momjian <bruce@momjian.us> wrote:


Stepping back a bit, there are several levels of compression:

1. single field
2. across all fields in a row
3. across rows on a single page
4. across all rows in a table
5. across tables in a database

We currently do #1 with TOAST, and your approach would allow the first
three. #4 feels like it is getting near the features of columnar
storage. I think it is unclear whether adding #2 and #3 produces enough of
a benefit to warrant special storage, given the complexity and overhead of
implementing it.

Regarding #4: compression across columns on a page would probably be much
more effective. There could be a preprocessing stage in which rows are
transformed into columns.

This doesn't need a real column store and could help a lot. A real column
store makes sense when columns are separated into different pages, but for
compression alone we can transform rows to columns without real columnar
storage (a toy sketch of this idea follows below).
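
A toy sketch of that preprocessing idea (not from the thread; the column
count and widths are invented for the illustration):

    #include <string.h>
    #include <lz4.h>

    #define NROWS 200
    #define NCOLS 4
    #define COLW  8                     /* bytes per fixed-width column value */
    #define PAGE  (NROWS * NCOLS * COLW)

    /* Rearrange row-major fixed-width tuples into column-major order so
     * that values of the same column become adjacent before compression. */
    static void
    rows_to_columns(const char *rows, char *cols)
    {
        for (int c = 0; c < NCOLS; c++)
            for (int r = 0; r < NROWS; r++)
                memcpy(cols + (c * NROWS + r) * COLW,
                       rows + (r * NCOLS + c) * COLW, COLW);
    }

    /* Compare LZ4 output sizes for the two layouts of the same data. */
    static void
    compare(const char *rows, int *plain, int *shuffled)
    {
        char        cols[PAGE];
        char        out[LZ4_COMPRESSBOUND(PAGE)];

        *plain = LZ4_compress_default(rows, out, PAGE, (int) sizeof(out));
        rows_to_columns(rows, cols);
        *shuffled = LZ4_compress_default(cols, out, PAGE, (int) sizeof(out));
        /* for typical tables 'shuffled' tends to come out smaller */
    }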

Probably an 8 kB page is too small for this case.

Regards

Pavel
