Microvacuum support for Hash Index
Hi All,
I have added microvacuum support for the hash index access method and
attached is the v1 patch for the same. The patch basically takes care
of the following things:
1. Firstly, it changes the marking of dead tuples from
'tuple-at-a-time' to 'page-at-a-time' during a hash index scan. For
this, we accumulate the heap TIDs and offsets of all the hash index
tuples pointed to by kill_prior_tuple during the scan, and then mark
all the accumulated TIDs as LP_DEAD either while stepping from one
page to another (the scan may run in either the forward or the
backward direction), at the end of the hash index scan, or during a
rescan.
2. Secondly, when inserting a tuple into the hash index, if not enough
space is found on the current page, it ensures that we first clean any
dead tuples found on the current hash index page before moving to the
next page in the bucket chain or going for a bucket split. This
increases page reusability and reduces the number of page splits,
thereby reducing the overall size of the hash index.
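The page-at-a-time marking described in point 1 can be illustrated
with a small, self-contained C model. Note that the struct and
function names below are invented for illustration; the actual patch
works on hash index pages and the scan's opaque state, not these toy
structures:

```c
#include <assert.h>

#define MAX_ITEMS 16
#define LP_NORMAL_SIM 0
#define LP_DEAD_SIM   1

/* Toy stand-in for the scan's list of killed items. */
typedef struct ScanState
{
    int killed[MAX_ITEMS];  /* offsets remembered during the scan */
    int nkilled;
} ScanState;

/* When the executor reports kill_prior_tuple, just remember the
 * offset instead of updating the page immediately. */
static void
remember_killed(ScanState *so, int offnum)
{
    if (so->nkilled < MAX_ITEMS)
        so->killed[so->nkilled++] = offnum;
}

/* When stepping off the page, at end of scan, or at rescan, mark all
 * remembered items dead in a single pass over the page. */
static void
kill_items(ScanState *so, int pageflags[MAX_ITEMS])
{
    for (int i = 0; i < so->nkilled; i++)
        pageflags[so->killed[i]] = LP_DEAD_SIM;
    so->nkilled = 0;
}
```

The point of the batching is that the page is dirtied (and a WAL-safe
hint update done) once per page visited, rather than once per killed
tuple.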
I have compared the hash index size with and without my patch
(microvacuum_hash_index_v1.patch, attached with this mail) on a
high-end machine at various scale factors, and the results are shown
below. For this test, I have created the hash index
(pgbench_accounts_aid) on the aid column of the 'pgbench_accounts'
table instead of the primary key, and the results shown in the table
below are for the same. The patch (pgbench.patch) containing these
changes is also attached with this mail. Moreover, I am using my own
script file (file_hash_kill_prior_tuple) for updating the index column
with the pgbench read-write command. The script file
'file_hash_kill_prior_tuple' is also attached with this mail.
Here are some initial test results showing the benefit of this patch:
postgresql.conf and pgbench settings:
autovacuum=off
client counts = 64
run time duration = 15 mins
./pgbench -c $threads -j $threads -T 900 postgres -f
~/file_hash_kill_prior_tuple
Scale Factor | hash index size @ start | HEAD     | HEAD + Patch
10           | 32 MB                   | 579 MB   | 158 MB
50           | 128 MB                  | 630 MB   | 350 MB
100          | 256 MB                  | 1255 MB  | 635 MB
300          | 1024 MB                 | 2233 MB  | 1093 MB
As shown in the results above, at scale factor 10 the hash index size
is reduced by almost 4 times with my patch, whereas at scale factors
50, 100 and 300 it is roughly halved. This shows that we can reduce
the hash index size to a good extent with this patch.
System specifications:
---------------------------------
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
Note: The patch (microvacuum_hash_index_v1.patch) is prepared on top
of concurrent_hash_index_v8.patch [1] and wal_hash_index_v5.1.patch [2]
for hash index.
[1]: /messages/by-id/CAA4eK1+X=8sUd1UCZDZnE3D9CGi9kw+kjxp2Tnw7SX5w8pLBNw@mail.gmail.com
[2]: /messages/by-id/CAA4eK1KE=+kkowyYD0vmch=ph4ND3H1tViAB+0cWTHqjZDDfqg@mail.gmail.com
On Mon, Oct 24, 2016 at 2:21 PM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:
Hi All,
I have added a microvacuum support for hash index access method and
attached is the v1 patch for the same.
This is important functionality for hash indexes, as we already have
the same functionality for other index types such as btree.
The patch basically takes care
of the following things:
1. Firstly, it changes the marking of dead tuples from
'tuple-at-a-time' to 'page-at-a-time' during hash index scan. For this
we accumulate the heap tids and offset of all the hash index tuples if
it is pointed by kill_prior_tuple during scan and then mark all
accumulated tids as LP_DEAD either while stepping from one page to
another (assuming the scan in both forward and backward direction) or
during end of the hash index scan or during rescan.
2. Secondly, when inserting tuple into hash index table, if not enough
space is found on a current page then it ensures that we first clean
the dead tuples if found in the current hash index page before moving
to the next page in a bucket chain or going for a bucket split. This
basically increases the page reusability and reduces the number of
page splits, thereby reducing the overall size of hash index table.
Few comments on patch:
1.
+static void
+hash_xlog_vacuum_one_page(XLogReaderState *record)
+{
+ XLogRecPtr lsn = record->EndRecPtr;
+ xl_hash_vacuum *xldata = (xl_hash_vacuum *) XLogRecGetData(record);
+ Buffer bucketbuf = InvalidBuffer;
+ Buffer buffer;
+ Buffer metabuf;
+ Page page;
+ XLogRedoAction action;
While replaying the delete/vacuum record on a standby, it can conflict
with some already-running queries. Basically, the replay can remove
rows that are still visible on the standby. You need to resolve such
conflicts similar to what we do for btree delete records (refer to
btree_xlog_delete).
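For context, the standby conflict described here is resolved by
computing a conflict horizon: the newest xmax among the heap tuples
whose index entries are being removed, so that standby queries whose
snapshots could still see those rows can be cancelled before replay
proceeds. A minimal sketch of that computation follows; the function
name and the plain unsigned comparison are simplifications for
illustration (the real code uses TransactionIdFollows to cope with xid
wraparound, and hands the result to
ResolveRecoveryConflictWithSnapshot):

```c
#include <assert.h>

typedef unsigned int TransactionId;

#define InvalidTransactionId ((TransactionId) 0)

/* Return the newest xmax among the heap tuples whose index entries
 * are about to be removed.  On the standby, replay must first cancel
 * queries whose snapshots still consider any xid up to this horizon
 * to be running. */
static TransactionId
latest_removed_xid(const TransactionId *heap_xmax, int ntuples)
{
    TransactionId latest = InvalidTransactionId;

    for (int i = 0; i < ntuples; i++)
    {
        /* simplified: real code uses TransactionIdFollows */
        if (heap_xmax[i] > latest)
            latest = heap_xmax[i];
    }
    return latest;
}
```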
2.
+ /*
+ * Write-lock the meta page so that we can decrement
+ * tuple count.
+ */
+ _hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);
+
+ _hash_vacuum_one_page(rel, metabuf, buf, bucket_buf,
+ (buf == bucket_buf) ? true : false);
+
+ _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
It seems the meta page lock is held here for longer than required, and
it is also not needed when there are no deletable items on the page.
You can take the metapage lock just before decrementing the count.
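The suggested reordering could be sketched in a simplified,
self-contained model (ToyLock and these function names are invented
for illustration; the actual code uses _hash_chgbufaccess on the
metapage buffer): do the page cleanup first, and take the metapage
lock only when there is actually a count to decrement, releasing it
immediately afterwards.

```c
#include <assert.h>

/* Toy lock that records whether it is held and how many operations
 * ran while it was held. */
typedef struct ToyLock
{
    int held;
    int ops_under_lock;
} ToyLock;

static void lock_meta(ToyLock *l)   { l->held = 1; }
static void unlock_meta(ToyLock *l) { l->held = 0; }

/* Remove dead items from the page without the metapage lock, then
 * lock the metapage only around the tuple-count decrement, and only
 * if something was actually deleted. */
static void
vacuum_one_page(ToyLock *meta, long *meta_ntuples,
                int *page_nitems, int ndeletable)
{
    *page_nitems -= ndeletable;   /* page cleanup, metapage unlocked */

    if (ndeletable > 0)
    {
        lock_meta(meta);
        meta->ops_under_lock++;
        *meta_ntuples -= ndeletable;
        unlock_meta(meta);
    }
}
```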
3.
Assert(offnum <= maxoff);
+
Spurious whitespace. There are some other similar spurious whitespace
changes in the patch; remove them as well.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Hi Amit,
Thanks for showing your interest and reviewing my patch. I have
started looking into your review comments. I will share the updated
patch in a day or two.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Hi,
While replaying the delete/vacuum record on standby, it can conflict
with some already running queries. Basically the replay can remove
some row which can be visible on standby. You need to resolve
conflicts similar to what we do in btree delete records (refer
btree_xlog_delete).
Agreed. Thanks for putting this point. I have taken care of it in the
attached v2 patch.
+ /*
+ * Write-lock the meta page so that we can decrement
+ * tuple count.
+ */
+ _hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);
+
+ _hash_vacuum_one_page(rel, metabuf, buf, bucket_buf,
+ (buf == bucket_buf) ? true : false);
+
+ _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
It seems here meta page lock is acquired for duration more than
required and also it is not required when there are no deletable items
on page. You can take the metapage lock before decrementing the count.
Ok. Corrected. Please refer to the attached v2 patch.
Spurious space. There are some other similar spurious white space
changes in patch, remove them as well.
Corrected. Please refer to the attached v2 patch.
Attachments:
microvacuum_hash_index_v2.patch (text/x-patch, +529/-15)
Hi,
On 11/02/2016 01:38 AM, Ashutosh Sharma wrote:
While replaying the delete/vacuum record on standby, it can conflict
with some already running queries. Basically the replay can remove
some row which can be visible on standby. You need to resolve
conflicts similar to what we do in btree delete records (refer
btree_xlog_delete).Agreed. Thanks for putting this point. I have taken care of it in the
attached v2 patch.
Some initial comments.
_hash_vacuum_one_page:
+ END_CRIT_SECTION();
+ _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
The _hash_chgbufaccess() needs a comment.
You also need a place where you call pfree for so->killedItems - maybe
in hashkillitems().
Best regards,
Jesper
Hi Jesper,
Some initial comments.
_hash_vacuum_one_page:
+ END_CRIT_SECTION();
+ _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
The _hash_chgbufaccess() needs a comment.
You also need a place where you call pfree for so->killedItems - maybe in
hashkillitems().
Thanks for reviewing this patch. I would like to update you that this
patch depends on the patches for concurrent hash index and WAL logging
in hash index. So, until those two hash index patches are stable, I
won't be able to share the next version of the patch for supporting
microvacuum in hash index.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On 11/11/2016 12:11 AM, Ashutosh Sharma wrote:
Thanks for reviewing this patch. I would like to update you that this
patch has got dependency on a patch for concurrent hash index and WAL
log in hash index. So, till these two patches for hash index are not
stable I won't be able to share you a next version of patch for
supporting microvacuum in hash index.
As the concurrent hash index patch was committed in 6d46f4, this patch
needs a rebase.
I have moved this submission to the next CF.
Thanks for working on this !
Best regards,
Jesper
On 11/11/2016 12:11 AM, Ashutosh Sharma wrote:
Hi Jesper,
Some initial comments.
_hash_vacuum_one_page:
+ END_CRIT_SECTION();
+ _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
The _hash_chgbufaccess() needs a comment.
You also need a place where you call pfree for so->killedItems - maybe in
hashkillitems().
Thanks for reviewing this patch. I would like to update you that this
patch has got dependency on a patch for concurrent hash index and WAL
log in hash index. So, till these two patches for hash index are not
stable I won't be able to share you a next version of patch for
supporting microvacuum in hash index.
This can be rebased on the WAL v7 patch [1]. In addition to the previous
comments you need to take commit 7819ba into account.
[1]: /messages/by-id/CAA4eK1+dmGNTFMnLO4EbOWJDHUq=+a2L8T=72ifXeh-Kd8HOsg@mail.gmail.com
Best regards,
Jesper
Hi,
This can be rebased on the WAL v7 patch [1]. In addition to the previous
comments you need to take commit 7819ba into account.
Attached is the v3 patch rebased on PostgreSQL HEAD and the WAL v7
patch. It also takes care of all the previous comments from Jesper [1].
Also, I have changed the status of this patch to "Needs review" for
this commit-fest.
[1]: /messages/by-id/a751842f-2aed-9f2e-104c-34cfe06bfbe2@redhat.com
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
Attachments:
microvacuum_hash_index_v3.patch (+538/-16)
Hi Ashutosh,
On 01/04/2017 06:13 AM, Ashutosh Sharma wrote:
Attached is the v3 patch rebased on postgreSQL HEAD and WAL v7 patch.
It also takes care of all the previous comments from Jesper - [1].
With an --enable-cassert build (master / WAL v7 / MV v3) and
-- ddl.sql --
CREATE TABLE test AS SELECT generate_series(1, 10) AS id, 0 AS val;
CREATE INDEX IF NOT EXISTS idx_id ON test USING hash (id);
CREATE INDEX IF NOT EXISTS idx_val ON test USING hash (val);
ANALYZE;
-- ddl.sql --
-- test.sql --
\set id random(1,10)
\set val random(0,10)
BEGIN;
UPDATE test SET val = :val WHERE id = :id;
COMMIT;
-- test.sql --
using pgbench -M prepared -c 10 -j 10 -T 600 -f test.sql test
crashes after a few minutes with
TRAP: FailedAssertion("!(LWLockHeldByMeInMode(((LWLock*)
(&(bufHdr)->content_lock)), LW_EXCLUSIVE))", File: "bufmgr.c", Line: 3781)
BTW, better rename 'hashkillitems' to '_hash_kill_items' to follow the
naming convention in hash.h
Best regards,
Jesper
Hi,
using pgbench -M prepared -c 10 -j 10 -T 600 -f test.sql test
crashes after a few minutes with
TRAP: FailedAssertion("!(LWLockHeldByMeInMode(((LWLock*)
(&(bufHdr)->content_lock)), LW_EXCLUSIVE))", File: "bufmgr.c", Line: 3781)
Attached v4 patch fixes this assertion failure.
BTW, better rename 'hashkillitems' to '_hash_kill_items' to follow the
naming convention in hash.h
okay, I have renamed 'hashkillitems' to '_hash_kill_items'. Please
check the attached v4 patch.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Attachments:
microvacuum_hash_index_v4.patch (+551/-16)
Hi Ashutosh,
On 01/06/2017 12:54 AM, Ashutosh Sharma wrote:
using pgbench -M prepared -c 10 -j 10 -T 600 -f test.sql test
crashes after a few minutes with
TRAP: FailedAssertion("!(LWLockHeldByMeInMode(((LWLock*)
(&(bufHdr)->content_lock)), LW_EXCLUSIVE))", File: "bufmgr.c", Line: 3781)Attached v4 patch fixes this assertion failure.
Yes, that fixes the problem.
However (master / WAL v7 / MV v4),
--- ddl.sql ---
CREATE TABLE test AS SELECT generate_series(1, 10) AS id, 0 AS val;
CREATE INDEX IF NOT EXISTS idx_id ON test USING hash (id);
CREATE INDEX IF NOT EXISTS idx_val ON test USING hash (val);
ANALYZE;
--- ddl.sql ---
--- test.sql ---
\set id random(1,10)
\set val random(0,10)
BEGIN;
DELETE FROM test WHERE id = :id;
INSERT INTO test VALUES (:id, :val);
COMMIT;
--- test.sql ---
gives
#9 0x000000000098a83e in elog_finish (elevel=20, fmt=0xb6ea92
"incorrect local pin count: %d") at elog.c:1378
#10 0x00000000007f0b33 in LockBufferForCleanup (buffer=1677) at
bufmgr.c:3605
#11 0x0000000000549390 in XLogReadBufferForRedoExtended
(record=0x2afced8, block_id=1 '\001', mode=RBM_NORMAL,
get_cleanup_lock=1 '\001', buf=0x7ffe3ee27c8c) at xlogutils.c:394
#12 0x00000000004c5026 in hash_xlog_vacuum_one_page (record=0x2afced8)
at hash_xlog.c:1109
#13 0x00000000004c5547 in hash_redo (record=0x2afced8) at hash_xlog.c:1214
#14 0x000000000053a361 in StartupXLOG () at xlog.c:6975
#15 0x00000000007a4ca0 in StartupProcessMain () at startup.c:216
on the slave instance in a master-slave setup.
Also, the src/backend/access/README file should be updated with a
description of the changes which this patch provides.
Best regards,
Jesper
Hi Jesper,
However (master / WAL v7 / MV v4),
--- ddl.sql ---
CREATE TABLE test AS SELECT generate_series(1, 10) AS id, 0 AS val;
CREATE INDEX IF NOT EXISTS idx_id ON test USING hash (id);
CREATE INDEX IF NOT EXISTS idx_val ON test USING hash (val);
ANALYZE;
--- ddl.sql ---
--- test.sql ---
\set id random(1,10)
\set val random(0,10)
BEGIN;
DELETE FROM test WHERE id = :id;
INSERT INTO test VALUES (:id, :val);
COMMIT;
--- test.sql ---
gives
#9 0x000000000098a83e in elog_finish (elevel=20, fmt=0xb6ea92 "incorrect
local pin count: %d") at elog.c:1378
#10 0x00000000007f0b33 in LockBufferForCleanup (buffer=1677) at
bufmgr.c:3605
#11 0x0000000000549390 in XLogReadBufferForRedoExtended (record=0x2afced8,
block_id=1 '\001', mode=RBM_NORMAL, get_cleanup_lock=1 '\001',
buf=0x7ffe3ee27c8c) at xlogutils.c:394
#12 0x00000000004c5026 in hash_xlog_vacuum_one_page (record=0x2afced8) at
hash_xlog.c:1109
#13 0x00000000004c5547 in hash_redo (record=0x2afced8) at hash_xlog.c:1214
#14 0x000000000053a361 in StartupXLOG () at xlog.c:6975
#15 0x00000000007a4ca0 in StartupProcessMain () at startup.c:216
on the slave instance in a master-slave setup.
Thanks for reporting this problem. It is basically happening because I
forgot to unpin the bucketbuf in hash_xlog_vacuum_one_page(). Please
find the attached v5 patch that fixes the issue.
Also, the src/backend/access/README file should be updated with a
description of the changes which this patch provides.
okay, I have updated the insertion algorithm in the README file.
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Attachments:
microvacuum_hash_index_v5.patch (+558/-17)
Hi Ashutosh,
On 01/10/2017 05:24 AM, Ashutosh Sharma wrote:
Thanks for reporting this problem. It is basically coming because i
forgot to unpin the bucketbuf in hash_xlog_vacuum_one_page(). Please
find the attached v5 patch that fixes the issue.
The crash is now fixed, but the
--- test.sql ---
\set id random(1,10)
\set val random(0,10)
BEGIN;
UPDATE test SET val = :val WHERE id = :id;
COMMIT;
--- test.sql ---
case gives
client 6 aborted in command 3 of script 0; ERROR: deadlock detected
DETAIL: Process 14608 waits for ShareLock on transaction 1444620;
blocked by process 14610.
Process 14610 waits for ShareLock on transaction 1444616; blocked by
process 14608.
HINT: See server log for query details.
CONTEXT: while rechecking updated tuple (12,3) in relation "test"
...
using pgbench -M prepared -c 10 -j 10 -T 300 -f test.sql test
Best regards,
Jesper
Hi Ashutosh,
On 01/10/2017 08:40 AM, Jesper Pedersen wrote:
On 01/10/2017 05:24 AM, Ashutosh Sharma wrote:
Thanks for reporting this problem. It is basically coming because i
forgot to unpin the bucketbuf in hash_xlog_vacuum_one_page(). Please
find the attached v5 patch that fixes the issue.The crash is now fixed, but the
--- test.sql --- \set id random(1,10) \set val random(0,10) BEGIN; UPDATE test SET val = :val WHERE id = :id; COMMIT; --- test.sql ---case gives
client 6 aborted in command 3 of script 0; ERROR: deadlock detected
DETAIL: Process 14608 waits for ShareLock on transaction 1444620;
blocked by process 14610.
Process 14610 waits for ShareLock on transaction 1444616; blocked by
process 14608.
HINT: See server log for query details.
CONTEXT: while rechecking updated tuple (12,3) in relation "test"
...using pgbench -M prepared -c 10 -j 10 -T 300 -f test.sql test
I'm not seeing this deadlock with just the WAL v8 patch applied.
Best regards,
Jesper
Hi Jesper,
I'm not seeing this deadlock with just the WAL v8 patch applied.
okay, Thanks for confirming that.
I would like to update you that I am not able to reproduce this issue
at my end. I suspect that the steps I am following might be slightly
different from yours. Could you please have a look at the steps
mentioned below and confirm whether there is something different that
I am doing?
Firstly, I am running the test case on the following git commit on HEAD:
<git-commmit>
commit ba61a04bc7fefeee03416d9911eb825c4897c223
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Jan 19 19:52:13 2017 -0500
Avoid core dump for empty prepared statement in an aborted transaction.
Brown-paper-bag bug in commit ab1f0c822: the old code here coped with
null CachedPlanSource.raw_parse_tree, the new code not so much.
Per report from Dave Cramer.
</git-commit>
On top of above commit, I have applied WAL v8 patch for hash index and
MV v5 patch.
Now, with an --enable-cassert build I am following below steps:
1) Created a 'test' database
2) psql -d test -f ~/ddl.sql
where ddl.sql is,
-- ddl.sql --
CREATE TABLE test AS SELECT generate_series(1, 10) AS id, 0 AS val;
CREATE INDEX IF NOT EXISTS idx_id ON test USING hash (id);
CREATE INDEX IF NOT EXISTS idx_val ON test USING hash (val);
ANALYZE;
-- ddl.sql --
3) pgbench -M prepared -c 10 -j 10 -T 1800 -f ~/test.sql test
where test.sql is,
-- test.sql --
\set id random(1,10)
\set val random(0,10)
BEGIN;
UPDATE test SET val = :val WHERE id = :id;
COMMIT;
-- test.sql --
Machine details are as follows:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 8
Also, it would be great if you could confirm whether you have been
seeing this issue repeatedly. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Hi Ashutosh,
On 01/20/2017 04:18 AM, Ashutosh Sharma wrote:
Yeah, those are the steps; just with a Skylake laptop.
However, I restarted with a fresh master, with WAL v8 and MV v5, and
can't reproduce the issue.
Best regards,
Jesper
Hi Ashutosh,
On 01/20/2017 03:24 PM, Jesper Pedersen wrote:
Yeah, those are the steps; just with a Skylake laptop.
However, I restarted with a fresh master, with WAL v8 and MV v5, and
can't reproduce the issue.
I have done some more testing with this, and have moved the patch
back to 'Needs Review' pending Amit's comments.
Best regards,
Jesper
On 01/23/2017 02:53 PM, Jesper Pedersen wrote:
I have done some more testing with this, and have moved to the patch
back to 'Needs Review' pending Amit's comments.
Moved to "Ready for Committer".
Best regards,
Jesper
On Thu, Jan 26, 2017 at 6:38 PM, Jesper Pedersen
<jesper.pedersen@redhat.com> wrote:
On 01/23/2017 02:53 PM, Jesper Pedersen wrote:
I have done some more testing with this, and have moved to the patch
back to 'Needs Review' pending Amit's comments.
Moved to "Ready for Committer".
Don't you think we should try to identify the reason for the deadlock
error you reported upthread [1]? I know that you and Ashutosh are not
able to reproduce it, but I still feel some investigation is required
to find the reason. It is quite possible that the test case is such
that a deadlock is expected in rare cases; if that is the case, then
it is okay. I have not spent enough time on it to comment on whether
it is a test or a code issue.
[1]: /messages/by-id/dc6d7247-050f-4014-8c80-a4ee676eb384@redhat.com
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com