Compression in PostgreSQL 9.6

Started by Shital A · over 6 years ago · 8 messages · general
#1Shital A
brightuser2019@gmail.com

Hello,

Need inputs on the below:

We are working on setting up a new, highly transactional (100k TPS) OLTP
system for payments using blockchain and PostgreSQL 9.6 as the DB on RHEL
7.6. The Postgres version is 9.6 rather than the latest because of the
specs of the blockchain component.

There is a requirement for data compression at the DB level. Please provide
your inputs on how this can best be achieved.

I checked the built-in TOAST mechanism; it compresses data only when a row
exceeds the ~2 kB threshold, correct? If the data values are small, then
even with a billion records they won't be compressed; this is what I
understood.

Are there any suggestions for compressing older data transparently,
irrespective of row size?
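(For reference, whether TOAST actually compressed a value can be checked by
comparing its stored size against its raw size; the table and column names
below are illustrative, not from the actual schema:)

```sql
-- pg_column_size() reports the on-disk size of the datum after any TOAST
-- compression, while octet_length() reports its uncompressed length.
-- A large gap between the two means the value was compressed.
SELECT octet_length(payload)   AS raw_bytes,
       pg_column_size(payload) AS stored_bytes
FROM   payments
LIMIT  10;
```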

Thanks.

#2Ron
ronljohnsonjr@gmail.com
In reply to: Shital A (#1)
Re: Compression In Postgresql 9.6

On 8/5/19 1:30 AM, Shital A wrote:

Hello,

Need inputs on the below:

We are working on setting up a new, highly transactional (100k TPS) OLTP
system for payments using blockchain and PostgreSQL 9.6 as the DB on RHEL
7.6. The Postgres version is 9.6 rather than the latest because of the
specs of the blockchain component.

There is a requirement for data compression at the DB level. Please provide
your inputs on how this can best be achieved.

Column-level compression?  Because some data just doesn't compress well.
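(Per-column behaviour can be tuned via the column's storage strategy; a
sketch, with illustrative table and column names:)

```sql
-- MAIN keeps the value inline and compressed where possible;
-- EXTERNAL stores it out-of-line *without* compression (useful for data
-- that is already high-entropy, e.g. hashes or ciphertext);
-- EXTENDED (the default for text/bytea) allows both compression and
-- out-of-line storage.
ALTER TABLE payments ALTER COLUMN payload SET STORAGE MAIN;
```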

I checked the built-in TOAST mechanism; it compresses data only when a row
exceeds the ~2 kB threshold, correct? If the data values are small, then
even with a billion records they won't be compressed; this is what I
understood.

Are there any suggestions for compressing older data transparently,
irrespective of row size?

Are your tables partitioned?

--
Angular momentum makes the world go 'round.

#3Shital A
brightuser2019@gmail.com
In reply to: Ron (#2)
Re: Compression In Postgresql 9.6

On Mon, 5 Aug 2019, 12:42 Ron, <ronljohnsonjr@gmail.com> wrote:

On 8/5/19 1:30 AM, Shital A wrote:

Hello,

Need inputs on the below:

We are working on setting up a new, highly transactional (100k TPS) OLTP
system for payments using blockchain and PostgreSQL 9.6 as the DB on RHEL
7.6. The Postgres version is 9.6 rather than the latest because of the
specs of the blockchain component.

There is a requirement for data compression at the DB level. Please provide
your inputs on how this can best be achieved.

Column-level compression? Because some data just doesn't compress well.

I checked the built-in TOAST mechanism; it compresses data only when a row
exceeds the ~2 kB threshold, correct? If the data values are small, then
even with a billion records they won't be compressed; this is what I
understood.

Are there any suggestions for compressing older data transparently,
irrespective of row size?

Are your tables partitioned?

--
Angular momentum makes the world go 'round.

Thanks for your reply, Ron.
Is there any reference link that can help me understand column-level
compression in Postgres?

Tables are not partitioned.

Thanks!

#4Imre Samu
pella.samu@gmail.com
In reply to: Shital A (#1)
Re: Compression In Postgresql 9.6

because of specs of blockchain component.

Based on this schema (
https://grisha.org/blog/2017/12/15/blockchain-and-postgres/ and
https://github.com/blkchain/pg_blkchain ),
IF (your) blockchain component is using BYTEA everywhere ( binary data
type: https://www.postgresql.org/docs/9.6/datatype-binary.html ),
THEN (imho) you can't expect a lot of space savings.

RHEL 7.6 ... There is a requirement for data compression

On RedHat (>=7.5) you can test the new VDO compression layer:
https://www.redhat.com/en/blog/look-vdo-new-linux-compression-layer
It was mentioned a few weeks ago: "... VDO compression for tables that are
less update intensive."
/messages/by-id/20190718173424.GB25488@aart.rice.edu

On the Postgres level, you can enable "wal_compression":
- https://www.postgresql.org/docs/9.6/runtime-config-wal.html
-
https://www.endpoint.com/blog/2017/03/28/postgres-wal-files-best-compression
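(A minimal postgresql.conf fragment for this; note that wal_compression
compresses only the full-page images written to WAL, not table data on
disk:)

```
# Available since PostgreSQL 9.5; can be changed with a reload (SIGHUP).
wal_compression = on
```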

regards,
Imre

Shital A <brightuser2019@gmail.com> wrote (on Mon, Aug 5, 2019, at 8:30):


#5Kenneth Marshall
ktm@rice.edu
In reply to: Shital A (#1)
Re: Compression In Postgresql 9.6

On Mon, Aug 05, 2019 at 12:00:14PM +0530, Shital A wrote:

Hello,

Need inputs on the below:

We are working on setting up a new, highly transactional (100k TPS) OLTP
system for payments using blockchain and PostgreSQL 9.6 as the DB on RHEL
7.6. The Postgres version is 9.6 rather than the latest because of the
specs of the blockchain component.

There is a requirement for data compression at the DB level. Please provide
your inputs on how this can best be achieved.

I checked the built-in TOAST mechanism; it compresses data only when a row
exceeds the ~2 kB threshold, correct? If the data values are small, then
even with a billion records they won't be compressed; this is what I
understood.

Are there any suggestions for compressing older data transparently,
irrespective of row size?

Thanks.

Hi,

On RHEL/CentOS you can use VDO filesystem compression to create an archive
tablespace for older data. That will compress everything.
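(A rough provisioning sketch; the device name, logical size, and mount
point are illustrative, and this needs root plus the vdo/kmod-kvdo
packages:)

```shell
# Create a VDO volume on a spare device; the logical size may exceed the
# physical size because of the expected compression/deduplication ratio.
vdo create --name=vdo_archive --device=/dev/sdb --vdoLogicalSize=1T

# Put XFS on top; -K skips the initial discard, and mounting with
# 'discard' passes block frees down to VDO so space is reclaimed.
mkfs.xfs -K /dev/mapper/vdo_archive
mkdir -p /mnt/vdo_archive/pgdata
mount -o discard /dev/mapper/vdo_archive /mnt/vdo_archive
chown postgres:postgres /mnt/vdo_archive/pgdata
```

After that, `CREATE TABLESPACE archive_ts LOCATION
'/mnt/vdo_archive/pgdata';` inside Postgres puts relations on the
compressed volume.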

Regards,
Ken

#6Ron
ronljohnsonjr@gmail.com
In reply to: Kenneth Marshall (#5)
Re: Compression In Postgresql 9.6

On 8/5/19 7:31 AM, Kenneth Marshall wrote:

On Mon, Aug 05, 2019 at 12:00:14PM +0530, Shital A wrote:

Hello,

Need inputs on the below:

We are working on setting up a new, highly transactional (100k TPS) OLTP
system for payments using blockchain and PostgreSQL 9.6 as the DB on RHEL
7.6. The Postgres version is 9.6 rather than the latest because of the
specs of the blockchain component.

There is a requirement for data compression at the DB level. Please provide
your inputs on how this can best be achieved.

I checked the built-in TOAST mechanism; it compresses data only when a row
exceeds the ~2 kB threshold, correct? If the data values are small, then
even with a billion records they won't be compressed; this is what I
understood.

Are there any suggestions for compressing older data transparently,
irrespective of row size?

Thanks.

Hi,

On RHEL/Centos you can use VDO filesystem compression to make an archive
tablespace to use for older data. That will compress everything.

Doesn't this imply that either his table is partitioned or he regularly
moves records from the main table to the archive table?

--
Angular momentum makes the world go 'round.

#7Kenneth Marshall
ktm@rice.edu
In reply to: Ron (#6)
Re: Compression In Postgresql 9.6

Hi,

On RHEL/Centos you can use VDO filesystem compression to make an archive
tablespace to use for older data. That will compress everything.

Doesn't this imply that either his table is partitioned or he
regularly moves records from the main table to the archive table?

Hi,

Yes, he will need to do something to meet his goals of both 100k TPS
and older archives kept online. He could also use something like
postgres_fdw to keep the archives on a separate server entirely.
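(A sketch of the postgres_fdw wiring; the server name, host, credentials,
and table definition are illustrative:)

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- The archive server holds the compressed, rarely-updated history.
CREATE SERVER archive_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'archive-host', dbname 'payments');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER archive_srv
    OPTIONS (user 'app_user', password 'secret');

-- Queries against this table are shipped to the remote server.
CREATE FOREIGN TABLE payments_archive (
    id         bigint,
    created_at timestamptz,
    payload    bytea
) SERVER archive_srv
  OPTIONS (schema_name 'public', table_name 'payments');
```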

Regards,
Ken

#8Shital A
brightuser2019@gmail.com
In reply to: Kenneth Marshall (#7)
Re: Compression In Postgresql 9.6

On Mon, 5 Aug 2019, 18:57 Kenneth Marshall, <ktm@rice.edu> wrote:

Hi,

On RHEL/Centos you can use VDO filesystem compression to make an archive
tablespace to use for older data. That will compress everything.

Doesn't this imply that either his table is partitioned or he
regularly moves records from the main table to the archive table?

Hi,

Yes, he will need to do something to meet his goals of both 100k TPS
and older archives kept online. He could also use something like
postgres_fdw to keep the archives on a separate server entirely.

Regards,
Ken

Thanks for the suggestions, guys!

After checking, I am thinking about the following approach:

1. Create an FS on a separate drive on the server with VDO.

2. Create a tablespace on the FS created above for storing the historical /
less update-intensive data.

3. Other tablespaces remain on a non-compressed FS.

4. Use table partitioning and create the archive tables in the tablespace
created in step 2.
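(One caveat with step 4: declarative partitioning arrived in PostgreSQL 10,
so on 9.6 partitioning is done with table inheritance plus CHECK
constraints. A sketch, with illustrative names and a hypothetical
archive_ts tablespace:)

```sql
-- Parent table on the regular, uncompressed tablespace.
CREATE TABLE payments (
    id         bigserial,
    created_at timestamptz NOT NULL,
    payload    bytea
);

-- An old month lives on the VDO-backed tablespace; with
-- constraint_exclusion enabled, the planner skips partitions whose
-- CHECK constraint cannot match the query's date range.
CREATE TABLE payments_2019_01 (
    CHECK (created_at >= '2019-01-01' AND created_at < '2019-02-01')
) INHERITS (payments) TABLESPACE archive_ts;
```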

- Will this complicate the DB design in terms of replication, backups, and
restores?
- Can this give optimum performance?

Let me know your views!

Thank you!