BUG #15609: synchronous_commit=off insert performance regression with secondary indexes
The following bug has been logged on the website:
Bug reference: 15609
Logged by: Jean Paolo Saul
Email address: paolo.saul@verizonconnect.com
PostgreSQL version: 11.1
Operating system: CentOS Linux release 7.6.1810 (Core)
Description:
Summary:
We are considering upgrading to PG11, and during performance testing we
have found that PostgreSQL 11 inserts are around 25% slower than inserts
in PostgreSQL 9.5.
Tools Used:
pgbench (11.1)
Test Overview:
1) InitDB and start four instances using versions PG9.5.15, PG9.6.11,
PG10.6, and PG11.1
2) Create a test table
3) pgbench using inserts to the test table
3.1) test using default config settings, synchronous_commit=off,
fsync=off
3.1.1) test with primary key only, primary key with one secondary
index, primary key with two secondary indexes, and primary key with
three secondary indexes
Test Setup:
Amazon EC2 Instance:
m4.16xlarge - 64 cores, 251GB RAM
50GB EBS, volume type: io1
Table:
CREATE TABLE test_indexes (test_indexes_id BIGSERIAL, bool_data BOOLEAN,
int_data INTEGER, text_data TEXT, PRIMARY KEY (test_indexes_id))
Indexes:
CREATE INDEX bool_idx ON test_indexes (bool_data)
CREATE INDEX int_idx ON test_indexes (int_data)
CREATE INDEX text_idx ON test_indexes (text_data)
Test Results (TPS is average of three runs):
** DEFAULT CONF VERSION TPS DIFF FROM PG95
pkey only
PG9.5 42414 0.0%
PG9.6 41967 -1.1%
PG10 43443 2.4%
PG11 43676 3.0%
bool index
PG9.5 42310 0.0%
PG9.6 42082 -0.5%
PG10 41902 -1.0%
PG11 42305 0.0%
bool+int index
PG9.5 41539 0.0%
PG9.6 41966 1.0%
PG10 41294 -0.6%
PG11 41819 0.7%
bool+int+text index
PG9.5 40000 0.0%
PG9.6 40526 1.3%
PG10 40582 1.5%
PG11 39882 -0.3%
** SYNCHRONOUS_COMMIT=OFF VERSION TPS DIFF FROM PG95
pkey only
PG9.5 103904 0.0%
PG9.6 100017 -3.7%
PG10 103857 0.0%
PG11 117147 12.7%
bool index
PG9.5 67283 0.0%
PG9.6 70850 5.3%
PG10 51113 -24.0%
PG11 49659 -26.2%
bool+int index
PG9.5 66048 0.0%
PG9.6 68247 3.3%
PG10 50558 -23.5%
PG11 47734 -27.7%
bool+int+text index
PG9.5 66732 0.0%
PG9.6 67131 0.6%
PG10 47157 -29.3%
PG11 47692 -28.5%
** FSYNC=OFF (10 SECS) VERSION TPS DIFF FROM PG95
no secondary index
PG9.5 90974 0.0%
PG9.6 90174 -0.9%
PG10 93661 3.0%
PG11 101758 11.9%
bool index
PG9.5 65328 0.0%
PG9.6 68447 4.8%
PG10 45757 -30.0%
PG11 46610 -28.7%
bool+int index
PG9.5 63247 0.0%
PG9.6 64010 1.2%
PG10 43378 -31.4%
PG11 45467 -28.1%
bool+int+text index
PG9.5 60768 0.0%
PG9.6 63230 4.1%
PG10 40968 -32.6%
PG11 44017 -27.6%
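[Editorially, the DIFF column above is just the percent change from the
PG9.5 baseline; a minimal shell sketch of the arithmetic, using sample
TPS values from the pkey-only, synchronous_commit=off rows:]

```shell
# Percent difference from the PG9.5 baseline, as shown in the DIFF column.
# Sample TPS values taken from the pkey-only, SYNCHRONOUS_COMMIT=OFF rows.
baseline=103904    # PG9.5 TPS
candidate=117147   # PG11 TPS
awk -v b="$baseline" -v c="$candidate" \
    'BEGIN { printf "%.1f%%\n", (c - b) / b * 100 }'
# prints 12.7%
```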
Questions:
Is there an extra setting for Postgres 10+ required to "recover" the
performance loss from PG9.5?
We are using PG9.5 with synchronous_commit=off in production, and the
majority of our tables have secondary indexes.
Why is PG10+ slower by default when synchronous_commit is off?
Notes:
Tested with all wal_sync_method settings: fdatasync, open_datasync,
fsync, fsync_writethrough (fails), open_sync; no statistically
significant difference was found
Did not test with updates or deletes
"PG" == PG Bug reporting form <noreply@postgresql.org> writes:
PG> Questions:
PG> Is there an extra setting for Postgres 10+ required to "recover"
PG> the performance loss from PG9.5?
The default wal_level changed between pg 9.6 and pg10; does reverting
that change make any difference? (This is just a guess.)
PG> Why is PG10+ slower by default when synchronous_commit is off?
synchronous_commit is probably only relevant to the extent that turning
it off causes the test not to be bottlenecked on WAL flush calls.
--
Andrew (irc:RhodiumToad)
On Tue, Jan 29, 2019 at 12:46 AM PG Bug reporting form <
noreply@postgresql.org> wrote:
The following bug has been logged on the website:
Bug reference: 15609
Can you show the actual pgbench command line used, and the contents of the
file specified by -f ?
Cheers,
Jeff
Hi Jeff,
Insert SQL:
INSERT INTO test_indexes (bool_data , int_data , text_data ) VALUES (
(RANDOM() * 10)::INT % 2 = 0, RANDOM() * 10000, MD5((RANDOM() *
1000)::TEXT) );
pgbench:
/usr/pgsql-11/bin/pgbench -Upostgres -f ${SQL} -n -c 60 -j 60 -T120 -p
${PORT}
Before each test run, I drop and recreate the table and indexes.
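[Editorially, the per-data-point procedure described here could be
scripted roughly as follows. This is a hypothetical sketch, not the
script actually used in the thread; the `run` helper, the `DRY_RUN`
switch, and the `$SQL` variable are my own names, mirroring the command
lines above.]

```shell
# Hypothetical sketch of one benchmark data point: recreate the table,
# optionally add a secondary index, then run pgbench. With DRY_RUN=1 the
# commands are printed instead of executed.
run() {
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
}

run_benchmark() {
    port=$1; index_col=$2
    run psql -Upostgres -p "$port" -c "DROP TABLE IF EXISTS test_indexes;
        CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
        bool_data BOOLEAN, int_data INTEGER, text_data TEXT,
        PRIMARY KEY (test_indexes_id));"
    if [ -n "$index_col" ]; then
        run psql -Upostgres -p "$port" \
            -c "CREATE INDEX ON test_indexes USING btree ($index_col);"
    fi
    run pgbench -Upostgres -f "$SQL" -n -c 60 -j 60 -T120 -p "$port"
}
```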
Cheers,
Paolo
On Wed, 30 Jan 2019 at 07:37, Jeff Janes <jeff.janes@gmail.com> wrote:
Can you show the actual pgbench command line used, and the contents of
the file specified by -f ?
--------------------------------------------------------------------------------------------------------------------------------------------
PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL
VERIZON CONNECT CONFIDENTIALITY NOTICE
This message is intended for the addressee only and may contain confidential and/or privileged material. Any review, re-transmission, dissemination, reliance upon or other use of this information by persons or entities other than the addressee is prohibited. If you receive this in error, please contact the sender and delete this message. Thank you.
Hi Andrew,
1) Changing wal_level did not make any difference with
synchronous_commit=off.
The same pattern also applies with fsync=off.
2) We suspect the same with regard to the flushing bottleneck being
removed.
But that seems to imply that there was a deeper change in the code that
causes this regression on PG10+.
I cannot be the only one who has noticed this, since PG10 has been out
for a while and secondary indexes are quite common in practice.
Could anyone point me in the right direction on how to investigate this
further?
Cheers,
Paolo
Test results below.
---------------------------
WAL_LEVEL = {DEFAULTS}
--(MINIMAL on 9.5/9.6, REPLICA on 10/11)
SYNCHRONOUS_COMMIT=OFF VERSION TPS DIFF FROM PG95
pkey only
PG9.5 104503 0.0%
PG9.6 98842 -5.4%
PG10 103924 -0.6%
PG11 117635 12.6%
bool index
PG9.5 67285 0.0%
PG9.6 70153 4.3%
PG10 53657 -20.3%
PG11 49952 -25.8%
bool+int index
PG9.5 67695 0.0%
PG9.6 68592 1.3%
PG10 51039 -24.6%
PG11 48630 -28.2%
bool+int+text index
PG9.5 66102 0.0%
PG9.6 67883 2.7%
PG10 48964 -25.9%
PG11 46215 -30.1%
WAL_LEVEL = MINIMAL
SYNCHRONOUS_COMMIT=OFF VERSION TPS DIFF FROM PG95
pkey only
PG9.5 103785 0.0%
PG9.6 98303 -5.3%
PG10 103369 -0.4%
PG11 116446 12.2%
bool index
PG9.5 67877 0.0%
PG9.6 70841 4.4%
PG10 52885 -22.1%
PG11 50111 -26.2%
bool+int index
PG9.5 67754 0.0%
PG9.6 69373 2.4%
PG10 52646 -22.3%
PG11 48824 -27.9%
bool+int+text index
PG9.5 66197 0.0%
PG9.6 69217 4.6%
PG10 50518 -23.7%
PG11 47389 -28.4%
WAL_LEVEL = HOT_STANDBY; (9.5, 9.6)
WAL_LEVEL = REPLICA; (10, 11)
SYNCHRONOUS_COMMIT=OFF VERSION TPS DIFF FROM PG95
pkey only
PG9.5 104718 0.0%
PG9.6 97279 -7.1%
PG10 104249 -0.4%
PG11 116119 10.9%
bool index
PG9.5 68819 0.0%
PG9.6 71223 3.5%
PG10 52592 -23.6%
PG11 50047 -27.3%
bool+int index
PG9.5 67057 0.0%
PG9.6 69256 3.3%
PG10 51317 -23.5%
PG11 48401 -27.8%
bool+int+text index
PG9.5 66727 0.0%
PG9.6 67591 1.3%
PG10 49819 -25.3%
PG11 47453 -28.9%
WAL_LEVEL = LOGICAL
SYNCHRONOUS_COMMIT=OFF VERSION TPS DIFF FROM PG95
pkey only
PG9.5 104208 0.0%
PG9.6 97920 -6.0%
PG10 104084 -0.1%
PG11 115364 10.7%
bool index
PG9.5 66910 0.0%
PG9.6 70968 6.1%
PG10 52719 -21.2%
PG11 48882 -26.9%
bool+int index
PG9.5 67704 0.0%
PG9.6 69768 3.0%
PG10 50080 -26.0%
PG11 49294 -27.2%
bool+int+text index
PG9.5 67490 0.0%
PG9.6 68872 2.0%
PG10 45837 -32.1%
PG11 46505 -31.1%
---------------------------
On Tue, 29 Jan 2019 at 20:30, Andrew Gierth <andrew@tao11.riddles.org.uk>
wrote:
synchronous_commit is probably only relevant to the extent that turning
it off causes the test not to be bottlenecked on WAL flush calls.
On Tue, Jan 29, 2019 at 2:05 PM Saul, Jean Paolo
<paolo.saul@verizonconnect.com> wrote:
Before each test run, I drop and recreate the table and indexes.
What happens if you don't create bool_idx, or replace it with another
index on some other column? I notice that you didn't show any case
that doesn't have this index, except for the PK-only case, which is
actually faster. I surmise that that's the common factor in all of the
test cases where you have observed a regression. It would be nice to
confirm or disprove this theory.
The nbtree code is known to deal poorly with low cardinality indexes
[1]. Are you comparing PostgreSQL installations that are on the same
hardware and operating system?
[1]: /messages/by-id/CAH2-Wzmf0fvVhU+SSZpGW4Qe9t--j_DmXdX3it5JcdB8FF2EsA@mail.gmail.com
--
Peter Geoghegan
Hi Peter,
Thanks for the tip! I did not think of that.
That seems to have narrowed down the regression to Boolean data types only.
I will run the test now against the most common base types that we use and
report back.
Is this a known regression/bug on PG10+? Sorry I could not find any
reference to this bug.
Cheers,
Paolo
p.s. Yes I am comparing PG installations on the same hardware and OS.
Test results below.
------------------------------
DEFAULT CONF VERSION TPS DIFF FROM PG95
PKEY ONLY
PG9.5 44633 0.0%
PG9.6 44947 0.7%
PG10 45069 1.0%
PG11 44868 0.5%
+ BOOL INDEX ONLY
PG9.5 43086 0.0%
PG9.6 43275 0.4%
PG10 43741 1.5%
PG11 43638 1.3%
+ INT INDEX ONLY
PG9.5 43169 0.0%
PG9.6 42306 -2.0%
PG10 43525 0.8%
PG11 44078 2.1%
+ TEXT INDEX ONLY
PG9.5 41918 0.0%
PG9.6 42117 0.5%
PG10 42339 1.0%
PG11 42680 1.8%
SYNCHRONOUS_COMMIT=OFF VERSION TPS DIFF FROM PG95
PKEY ONLY
PG9.5 104048 0.0%
PG9.6 99267 -4.6%
PG10 104050 0.0%
PG11 116392 11.9%
+ BOOL INDEX ONLY
PG9.5 68366 0.0%
PG9.6 71196 4.1%
PG10 53265 -22.1%
PG11 52031 -23.9%
+ INT INDEX ONLY
PG9.5 102265 0.0%
PG9.6 96054 -6.1%
PG10 101051 -1.2%
PG11 113278 10.8%
+ TEXT INDEX ONLY
PG9.5 103689 0.0%
PG9.6 95384 -8.0%
PG10 101014 -2.6%
PG11 112658 8.7%
FSYNC=OFF VERSION TPS DIFF FROM PG95
PKEY ONLY
PG9.5 91968 0.0%
PG9.6 88558 -3.7%
PG10 94235 2.5%
PG11 100683 9.5%
+ BOOL INDEX ONLY
PG9.5 64236 0.0%
PG9.6 67519 5.1%
PG10 47473 -26.1%
PG11 46812 -27.1%
+ INT INDEX ONLY
PG9.5 89027 0.0%
PG9.6 85790 -3.6%
PG10 91315 2.6%
PG11 96348 8.2%
+ TEXT INDEX ONLY
PG9.5 87348 0.0%
PG9.6 84636 -3.1%
PG10 89983 3.0%
PG11 95333 9.1%
------------------------------
Hi,
On Tue, Jan 29, 2019 at 8:27 PM Saul, Jean Paolo
<paolo.saul@verizonconnect.com> wrote:
That seems to have narrowed down the regression to Boolean data types only.
I will run the test now against the most common base types that we use
and report back.
Is this a known regression/bug on PG10+? Sorry I could not find any
reference to this bug.
I can't think of a reason why the problem would be any worse on recent
releases right now. However, the behavior I'm describing is
complicated. I could have missed something.
--
Peter Geoghegan
On Tue, Jan 29, 2019 at 07:30:09AM +0000, Andrew Gierth wrote:
The default wal_level changed between pg 9.6 and pg10, does reverting
that change make any difference? (this is just a guess)
It seems to me that Andrew has the right conclusion here. The OP
mentions that a couple of data folders are just initdb'd, which could
point to the fact that the default configuration set is used for all of
them.
--
Michael
On Tue, Jan 29, 2019 at 11:32 PM Saul, Jean Paolo
<paolo.saul@verizonconnect.com> wrote:
But that seems to imply that there was a change deeper in the code that causes this regression on PG10+.
IMHO I cannot be the only one that noticed this, since PG10 has been out for a while and secondary indexes are quite common in practice.
I was wondering if anyone can point me in the right direction on how to further investigate this?
What size is bool_idx, and the other indexes once your benchmarks
finish? How do they compare across versions?
--
Peter Geoghegan
Hi Peter,
After each run, I did a:
SELECT pg_total_relation_size( {index name} ) / COUNT(1)::FLOAT FROM
test_indexes
and the standard deviation between PG versions is < 1%.
The boolean and int indexes come to about 28-29 bytes per row.
The text index comes to about 77-78 bytes per row.
So, not much change between PG versions.
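[Editorially, the per-row figure is just total relation size divided by
row count, as in the query above; a quick sketch of the arithmetic. The
size and row count here are hypothetical, chosen only to land near the
reported ~28 bytes/row for the int index.]

```shell
# Bytes per row = pg_total_relation_size(index) / row count. The size and
# row count below are hypothetical, chosen to match the ~28 bytes/row
# figure reported above.
awk -v size=28000000 -v rows=1000000 \
    'BEGIN { printf "%.0f bytes per row\n", size / rows }'
# prints 28 bytes per row
```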
To test your comment about low cardinality indexes, I changed the
inserted values:
for integers: (RANDOM()*10)::INT % 2
for text: MD5(((RANDOM()*10)::INT % 2)::TEXT)
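[Editorially, the modulo-2 in these expressions is what makes the index
low cardinality: each column collapses to two distinct values. A quick
awk simulation of the integer expression; note that PostgreSQL's ::INT
cast rounds to the nearest integer, so the simulation rounds too.]

```shell
# Simulate (RANDOM()*10)::INT % 2: PostgreSQL's ::INT cast rounds to the
# nearest integer, so round before taking the modulus. The expression can
# only ever produce the two values 0 and 1.
awk 'BEGIN {
    srand(1)
    for (i = 0; i < 100000; i++)
        seen[int(rand() * 10 + 0.5) % 2] = 1
    n = 0
    for (v in seen) n++
    print n " distinct values"
}'
# prints 2 distinct values
```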
The latest results show that the text and integer indexes now behave as
poorly as the boolean index.
The performance hit compared to versions prior to PG10 is visibly
disappointing.
Are there any workarounds to this, as far as you can see?
Cheers,
Paolo
-----------------------------------------------------
INT_DATA = (RANDOM()*10)::INT % 2
synchronous_commit=off
-----------------------------------------------------
version TPS diff from pg95
PKEY
PG9.5 102899 0.0%
PG9.6 97983 -4.8%
PG10 104842 1.9%
PG11 115594 12.3%
BOOL INDEX
PG9.5 67284 0.0%
PG9.6 69950 4.0%
PG10 52404 -22.1%
PG11 49837 -25.9%
INT INDEX *
PG9.5 69014 0.0%
PG9.6 71588 3.7%
PG10 50918 -26.2%
PG11 49780 -27.9%
TEXT INDEX
PG9.5 102695 0.0%
PG9.6 95124 -7.4%
PG10 101953 -0.7%
PG11 113096 10.1%
-----------------------------------------------------
TEXT_DATA = MD5(((RANDOM()*10)::INT % 2)::TEXT)
synchronous_commit=off
-----------------------------------------------------
version TPS diff from pg95
PKEY
PG9.5 104257 0.0%
PG9.6 98600 -5.4%
PG10 104352 0.1%
PG11 116419 11.7%
BOOL INDEX
PG9.5 67919 0.0%
PG9.6 71416 5.1%
PG10 51486 -24.2%
PG11 50160 -26.1%
INT INDEX
PG9.5 102088 0.0%
PG9.6 94483 -7.4%
PG10 100541 -1.5%
PG11 112723 10.4%
TEXT INDEX *
PG9.5 63001 0.0%
PG9.6 63970 1.5%
PG10 45311 -28.1%
PG11 45556 -27.7%
-----------------------------------------------------
Additional tests replacing BTREE indexes with HASH indexes indicate that
hash indexes do not suffer from this low-cardinality performance
regression.
However, I cannot use hash indexes in our systems, as they are
discouraged.
On Thu, Jan 31, 2019 at 04:08:39PM +1300, Saul, Jean Paolo wrote:
Additional tests replacing BTREE indexes with HASH indexes indicate that hash
indexes do not suffer from low-cardinality performance regression.
However I cannot use hash indexes in our systems as they are discouraged.
Hash indexes have been crash-safe since PG 10, so their use is no longer
discouraged, at least from a project perspective.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Hi Bruce,
Thank you for that info. We will likely migrate our boolean indexes (and
possibly all our low cardinality indexes) to hash after we move to PG11.
How would I know whether the PostgreSQL team will be investigating this
possible regression?
Or can someone help me further identify the root cause in the code?
It would be nice if we could patch this out, or at least find the reason
for the slowness.
Does btree somehow hold a global lock on a leaf node while it is being
modified?
Sorry for all the questions.
Cheers,
Paolo
On Fri, 1 Feb 2019 at 03:36, Bruce Momjian <bruce@momjian.us> wrote:
Hash indexes are crash safe since PG 10 so their use is no longer
discouraged, at least from a project perspective.
On Wed, Jan 30, 2019 at 6:06 PM Saul, Jean Paolo
<paolo.saul@verizonconnect.com> wrote:
I am testing your comment about low cardinality indexes, and changed my
inserted values.
for integers: (RANDOM()*10)::INT % 2
for text: MD5(((RANDOM()*10)::INT % 2)::TEXT)
The latest results show that text and integer indexes now behave poorly
like the boolean index.
The performance hit is visibly disappointing compared to versions prior
to PG10.
FWIW, I cannot recreate this. I still have no reason to believe that
this problem with low cardinality indexes would be any worse on more
recent versions.
BTW, I think that you'll find that hash indexes don't do as well as
B-Tree indexes with lots of duplicates when reading.
--
Peter Geoghegan
Hi Peter,
That is unfortunate (both your results and the hash performance info).
We are still testing hash indexing and select/update/delete performance and
will keep that warning in mind.
I have replicated my setup by running the script by hand (see below).
Can you see anything in the test setup that seems to be wrong?
Thanks for all your feedback.
Cheers,
Paolo
------------------------------------
Notes:
Aside from the port number, all settings use default values, and are
only overridden when we pass -o to pg_ctl at startup.
The pgbench output is grep'ed and awk'ed to show only the "tps
(including connections establishing)" figure.
I changed the pgbench run time to only 10 secs. This was enough to show
the pattern, and I didn't want to run the pgbench suite again at 20 mins
per data point.
Note how I change the insert SQL, and watch the pgbench patterns change
when synchronous_commit is off/on and when the indexes are applied to
bool or text columns.
The data is still consistent with my initial results :(. If anyone can
spot where my tests could have gone wrong, that would be very much
appreciated.
Raw commandline output below.
------------------------------------
demo_server $ cat /etc/*release
CentOS Linux release 7.6.1810 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.6.1810 (Core)
CentOS Linux release 7.6.1810 (Core)
demo_server $ free -m
              total        used        free      shared  buff/cache   available
Mem:         257773        1941      248149        1352        7683      253379
Swap:             0           0           0
demo_server $ grep cores /proc/cpuinfo |wc -l
64
demo_server $ head /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
stepping : 1
microcode : 0xb000031
cpu MHz : 1820.739
cache size : 46080 KB
physical id : 0
demo_server $ df -h /var/data/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdn1 50G 14G 37G 28% /var/data
demo_server $
demo_server $ sudo -u postgres /usr/pgsql-9.5/bin/pg_ctl start -D
/var/data/test95_data
server starting
demo_server $ < 2019-02-01 00:48:02.079 UTC >LOG: redirecting log output
to logging collector process
< 2019-02-01 00:48:02.079 UTC >HINT: Future log output will appear in
directory "pg_log".
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl start -D
/var/data/test11_data
waiting for server to start....2019-02-01 00:50:30.745 UTC [47358] LOG:
listening on IPv4 address "0.0.0.0", port 11000
2019-02-01 00:50:30.745 UTC [47358] LOG: listening on IPv6 address "::",
port 11000
2019-02-01 00:50:30.747 UTC [47358] LOG: listening on Unix socket
"/var/run/postgresql/.s.PGSQL.11000"
2019-02-01 00:50:30.750 UTC [47358] LOG: listening on Unix socket
"/tmp/.s.PGSQL.11000"
2019-02-01 00:50:30.765 UTC [47358] LOG: redirecting log output to logging
collector process
2019-02-01 00:50:30.765 UTC [47358] HINT: Future log output will appear in
directory "log".
done
server started
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $
demo_server $
demo_server $ echo 'INSERT INTO test_indexes (bool_data , int_data ,
text_data ) VALUES ( (RANDOM()*10)::INT % 2 = 0, RANDOM()*10000,
MD5((RANDOM()*1000)::TEXT) );' > /home/postgres/insert_test.sql
demo_server $
demo_server $
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
43266.931198
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
45366.444002
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl -D
/var/data/test95_data stop
waiting for server to shut down.... done
server stopped
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl -D
/var/data/test11_data stop
waiting for server to shut down.... done
server stopped
demo_server $ sudo -u postgres /usr/pgsql-9.5/bin/pg_ctl start -D
/var/data/test95_data -o "-c synchronous_commit=0"
server starting
demo_server $ < 2019-02-01 02:48:01.396 UTC >LOG: redirecting log output
to logging collector process
< 2019-02-01 02:48:01.396 UTC >HINT: Future log output will appear in
directory "pg_log".
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl start -D
/var/data/test11_data -o "-c synchronous_commit=0"
waiting for server to start....2019-02-01 02:48:33.186 UTC [64491] LOG:
listening on IPv4 address "0.0.0.0", port 11000
2019-02-01 02:48:33.186 UTC [64491] LOG: listening on IPv6 address "::",
port 11000
2019-02-01 02:48:33.187 UTC [64491] LOG: listening on Unix socket
"/var/run/postgresql/.s.PGSQL.11000"
2019-02-01 02:48:33.190 UTC [64491] LOG: listening on Unix socket
"/tmp/.s.PGSQL.11000"
2019-02-01 02:48:33.204 UTC [64491] LOG: redirecting log output to logging
collector process
2019-02-01 02:48:33.204 UTC [64491] HINT: Future log output will appear in
directory "log".
done
server started
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
104986.476772
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
117705.555724
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (bool_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (bool_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
66106.908670
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
50950.234033
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (int_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (int_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
102645.086816
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
112472.667768
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (text_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (text_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
103298.461287
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
113698.041576
demo_server $
demo_server $
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl -D
/var/data/test95_data stop
waiting for server to shut down.... done
server stopped
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl -D
/var/data/test11_data stop
waiting for server to shut down.... done
server stopped
demo_server $
demo_server $
demo_server $ echo 'INSERT INTO test_indexes (bool_data , int_data ,
text_data ) VALUES ( (RANDOM()*10)::INT % 2 = 0, RANDOM()*10000,
MD5(((RANDOM()*1000)::BIGINT % 2)::TEXT) );' >
/home/postgres/insert_test.sql
demo_server $
demo_server $
demo_server $ sudo -u postgres /usr/pgsql-9.5/bin/pg_ctl start -D
/var/data/test95_data
server starting
demo_server $ < 2019-02-01 03:26:10.233 UTC >LOG: redirecting log output
to logging collector process
< 2019-02-01 03:26:10.233 UTC >HINT: Future log output will appear in
directory "pg_log".
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl start -D
/var/data/test11_data
waiting for server to start....2019-02-01 03:26:31.248 UTC [71705] LOG:
listening on IPv4 address "0.0.0.0", port 11000
2019-02-01 03:26:31.248 UTC [71705] LOG: listening on IPv6 address "::",
port 11000
2019-02-01 03:26:31.249 UTC [71705] LOG: listening on Unix socket
"/var/run/postgresql/.s.PGSQL.11000"
2019-02-01 03:26:31.252 UTC [71705] LOG: listening on Unix socket
"/tmp/.s.PGSQL.11000"
2019-02-01 03:26:31.267 UTC [71705] LOG: redirecting log output to logging
collector process
2019-02-01 03:26:31.267 UTC [71705] HINT: Future log output will appear in
directory "log".
done
server started
demo_server $
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
/usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF EXISTS
test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
47055.402951
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
47844.925367
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (bool_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (bool_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
44519.824705
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
43483.982157
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (int_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (int_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
43995.744128
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
44881.281223
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (text_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (text_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
43505.532207
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
43480.194225
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl -D
/var/data/test95_data stop
waiting for server to shut down.... done
server stopped
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl -D
/var/data/test11_data stop
waiting for server to shut down.... done
server stopped
demo_server $ sudo -u postgres /usr/pgsql-9.5/bin/pg_ctl start -D
/var/data/test95_data -o "-c synchronous_commit=0"
sudo -u postgres /usr/pgsql-11/bin/pg_ctl start -D /var/data/test11_data -o
"-c synchronous_commit=0"
server starting
demo_server $ sudo -u postgres /usr/pgsql-11/bin/pg_ctl start -D
/var/data/test11_data -o "-c synchronous_commit=0"
waiting for server to start....2019-02-01 03:38:13.884 UTC [75407] LOG:
listening on IPv4 address "0.0.0.0", port 11000
2019-02-01 03:38:13.884 UTC [75407] LOG: listening on IPv6 address "::",
port 11000
2019-02-01 03:38:13.885 UTC [75407] LOG: listening on Unix socket
"/var/run/postgresql/.s.PGSQL.11000"
2019-02-01 03:38:13.888 UTC [75407] LOG: listening on Unix socket
"/tmp/.s.PGSQL.11000"
2019-02-01 03:38:13.902 UTC [75407] LOG: redirecting log output to logging
collector process
2019-02-01 03:38:13.902 UTC [75407] HINT: Future log output will appear in
directory "log".
done
server started
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
105304.025276
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
117373.057477
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (bool_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (bool_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
67686.991879
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
52750.704496
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (int_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (int_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
103595.827322
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
112841.729898
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"DROP TABLE IF
EXISTS test_indexes; CREATE TABLE test_indexes (test_indexes_id BIGSERIAL,
bool_data BOOLEAN, int_data INTEGER, text_data TEXT, PRIMARY KEY
(test_indexes_id));"
CREATE TABLE
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c"create index on
test_indexes using btree (text_data);"
/usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on test_indexes
using btree (text_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c"create index on
test_indexes using btree (text_data);"
CREATE INDEX
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 9500 postgres |grep
tps|head -n1|awk '{print " ",$3}'
62094.775577
demo_server $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/insert_test.sql -n -c 60 -j 60 -T10 -p 11000 postgres |grep
tps|head -n1|awk '{print " ",$3}'
46815.366392
demo_server $
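The `grep tps|head -n1|awk '{print " ",$3}'` pipeline used throughout simply takes the third field of the first tps line, i.e. the "including connections establishing" figure. An equivalent sketch in Python, for anyone scripting these runs:

```python
import re

def first_tps(pgbench_output: str) -> float:
    """Extract the first reported tps figure from pgbench output
    (pgbench 11 prints the 'including connections establishing'
    line before the 'excluding' one)."""
    for line in pgbench_output.splitlines():
        m = re.match(r"tps = ([0-9.]+)", line)
        if m:
            return float(m.group(1))
    raise ValueError("no tps line in pgbench output")

sample = (
    "number of transactions actually processed: 1083976\n"
    "latency average = 0.554 ms\n"
    "tps = 108379.219155 (including connections establishing)\n"
    "tps = 108472.988431 (excluding connections establishing)\n"
)
print(first_tps(sample))  # 108379.219155
```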
--------------------------------------------------------------------------------------------------------------------------------------------
PLEASE CONSIDER THE ENVIRONMENT BEFORE PRINTING THIS EMAIL
VERIZON CONNECT CONFIDENTIALITY NOTICE
This message is intended for the addressee only and may contain confidential and/or privileged material. Any review, re-transmission, dissemination, reliance upon or other use of this information by persons or entities other than the addressee is prohibited. If you receive this in error, please contact the sender and delete this message. Thank you.
Hi Peter,
The last post was too long. I think I have a much simpler example that
is easier to replicate.
Cheers,
Paolo
------------------------------
postgres=# create table lowc_test (id bigserial, int_data int, primary
key(id));
CREATE TABLE
postgres=# \timing
Timing is on.
postgres=# truncate lowc_test;
postgres=# show synchronous_commit;
synchronous_commit
--------------------
off
(1 row)
Time: 0.445 ms
postgres=# show fsync;
fsync
-------
on
(1 row)
Time: 0.331 ms
postgres=# -- no secondary index
postgres=#
postgres=# insert into lowc_test (int_data) select s from
generate_series(1,5000000) s;
INSERT 0 5000000
Time: 12440.497 ms (00:12.440)
postgres=# truncate lowc_test;
TRUNCATE TABLE
Time: 68.427 ms
postgres=# create index on lowc_test using btree (int_data);
CREATE INDEX
Time: 2.449 ms
postgres=# -- 2ndary index w/ normal sequence of values inserted
postgres=#
postgres=# insert into lowc_test (int_data) select s from
generate_series(1,5000000) s;
INSERT 0 5000000
Time: 17221.095 ms (00:17.221)
postgres=# -- 2ndary index w/ single value inserted
postgres=#
postgres=# truncate lowc_test;
TRUNCATE TABLE
Time: 83.846 ms
postgres=# insert into lowc_test (int_data) select 42 from
generate_series(1,5000000) s;
INSERT 0 5000000
Time: 21440.356 ms (00:21.440)
------------------------------
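One plausible reading of the three timings above (12.4 s with no index, 17.2 s with ascending keys, 21.4 s with a single repeated value) is the known nbtree duplicate-handling cost: with a long run of equal keys, an inserter starts at the beginning of the run and walks right looking for a leaf page with free space, so the walk gets longer as the run grows, while strictly ascending keys always land on the rightmost leaf. A toy Python model of that assumption (an editorial sketch, not PostgreSQL's actual code):

```python
PAGE_CAPACITY = 100  # tuples per leaf page in this toy model

def insert_cost_duplicates(n):
    """All n keys are equal: each insert starts at the first page of the
    duplicate run and walks right until it finds a page with room.
    Returns total leaf-page visits."""
    pages = [0]   # fill counts of the leaf pages holding the run
    visits = 0
    for _ in range(n):
        i = 0
        while pages[i] >= PAGE_CAPACITY:
            i += 1
            if i == len(pages):
                pages.append(0)
        visits += i + 1
        pages[i] += 1
    return visits

def insert_cost_unique_ascending(n):
    # Ascending keys always target the rightmost leaf: one visit each.
    return n

print(insert_cost_duplicates(5000))        # 127500
print(insert_cost_unique_ascending(5000))  # 5000
```

The duplicate case visits pages quadratically in the length of the run, which is consistent with the single-value insert being the slowest of the three.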
On Sun, Feb 3, 2019 at 2:45 PM Saul, Jean Paolo
<paolo.saul@verizonconnect.com> wrote:
The last post was too long. I think I have a much simpler example that is easier to replicate.
This new example is very similar to examples that I have personally
come up with. I have no difficulty explaining why the case with lots
of duplicates is slower, so it doesn't really help.
I cannot account for why you can observe a difference across Postgres
versions, though -- that's what I'm having difficulty with. Are you
sure about that effect? There haven't been any directly relevant
changes in this area in many years.
--
Peter Geoghegan
Hi Peter,
Thanks! I see.
It looks like I cannot replicate it in a single transaction; I have to
use pgbench instead.
Another simple test output is below.
What do you think?
Cheers,
Paolo
p.s.
synchronous_commit = off
pg9.5 on port 9500
pg11 on port 11000
-------
demo_server.pg $ echo 'INSERT INTO lowc_test (int_data) SELECT 42; --
arbitrary ' > /home/postgres/simple_insert_low.sql
demo_server.pg $
----- POSTGRESQL 9.5 -----
demo_server.pg $
demo_server.pg $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c 'CREATE TABLE
lowc_test (id bigserial, int_data int, PRIMARY KEY(id))'
CREATE TABLE
demo_server.pg $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/simple_insert_low.sql -n -c 60 -j 60 -T10 -p 9500 postgres
transaction type: /home/postgres/simple_insert_low.sql
scaling factor: 1
query mode: simple
number of clients: 60
number of threads: 60
duration: 10 s
number of transactions actually processed: 1083976
latency average = 0.554 ms
tps = 108379.219155 (including connections establishing)
tps = 108472.988431 (excluding connections establishing)
demo_server.pg $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c 'CREATE INDEX
ON lowc_test USING BTREE (int_data);'
CREATE INDEX
demo_server.pg $ /usr/pgsql-11/bin/psql -Upostgres -p9500 -c 'TRUNCATE
lowc_test;'
TRUNCATE TABLE
demo_server.pg $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/simple_insert_low.sql -n -c 60 -j 60 -T10 -p 9500 postgres
transaction type: /home/postgres/simple_insert_low.sql
scaling factor: 1
query mode: simple
number of clients: 60
number of threads: 60
duration: 10 s
number of transactions actually processed: 478797
latency average = 1.254 ms
*tps = 47865.701374 (including connections establishing)*
*tps = 47909.167492 (excluding connections establishing)*
----- POSTGRESQL 11 -----
demo_server.pg $
demo_server.pg $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c 'CREATE TABLE
lowc_test (id bigserial, int_data int, PRIMARY KEY(id))'
CREATE TABLE
demo_server.pg $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/simple_insert_low.sql -n -c 60 -j 60 -T10 -p 11000 postgres
transaction type: /home/postgres/simple_insert_low.sql
scaling factor: 1
query mode: simple
number of clients: 60
number of threads: 60
duration: 10 s
number of transactions actually processed: 1215185
latency average = 0.494 ms
tps = 121488.366924 (including connections establishing)
tps = 121610.790950 (excluding connections establishing)
demo_server.pg $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c 'CREATE INDEX
ON lowc_test USING BTREE (int_data);'
CREATE INDEX
demo_server.pg $ /usr/pgsql-11/bin/psql -Upostgres -p11000 -c 'TRUNCATE
lowc_test;'
TRUNCATE TABLE
demo_server.pg $ /usr/pgsql-11/bin/pgbench -Upostgres -f
/home/postgres/simple_insert_low.sql -n -c 60 -j 60 -T10 -p 11000 postgres
transaction type: /home/postgres/simple_insert_low.sql
scaling factor: 1
query mode: simple
number of clients: 60
number of threads: 60
duration: 10 s
number of transactions actually processed: 341168
latency average = 1.759 ms
*tps = 34100.743631 (including connections establishing)*
*tps = 34137.949909 (excluding connections establishing)*
On Mon, 4 Feb 2019 at 12:10, Peter Geoghegan <pg@bowt.ie> wrote:
On Sun, Feb 3, 2019 at 2:45 PM Saul, Jean Paolo
<paolo.saul@verizonconnect.com> wrote:
The last post was too long. I think I have a much simpler example
that is easier to replicate.
This new example is very similar to examples that I have personally
come up with. I have no difficulty explaining why the case with lots
of duplicates is slower, so it doesn't really help.
I cannot account for why you can observe a difference across Postgres
versions, though -- that's what I'm having difficulty with. Are you
sure about that effect? There haven't been any directly relevant
changes in this area in many years.
--
Peter Geoghegan
On Fri, Feb 1, 2019 at 10:49:13AM +1300, Saul, Jean Paolo wrote:
Hi Bruce,
Thank you for that info. We will likely migrate our boolean indexes (and
possibly all our low cardinality indexes) to hash after we move to PG11.
Uh, there is rarely value in creating boolean indexes because, for an
index to be useful, it should have high selectivity. What people often
do is to create _partial_ indexes on true, false, or NULL values that
are of high selectivity. Since there is only a single value in the
index, I guess a hash index would be better than btree, but I am not
sure.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
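To illustrate Bruce's suggestion against the test table from this thread (hypothetical: it assumes the workload only ever searches for the rarer boolean value, which is not true of the synthetic 50/50 data above):

```sql
-- Instead of indexing every row's boolean:
--   CREATE INDEX bool_idx ON test_indexes (bool_data);
-- index only the selective subset, e.g. if most rows are false and
-- queries look for the true ones:
CREATE INDEX bool_true_idx ON test_indexes (test_indexes_id)
    WHERE bool_data;
```

Queries with a matching `WHERE bool_data` clause can use the small partial index, and inserts of false rows skip index maintenance entirely.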