[POC] Faster processing at Gather node

Started by Rafia Sabih · almost 9 years ago · 64 messages · pgsql-hackers
#1 Rafia Sabih
rafia.sabih@enterprisedb.com

Hello everybody,

While analysing the performance of TPC-H queries for the newly developed
parallel operators (parallel index scan, bitmap heap scan, etc.) we noticed
that the time taken by the Gather node is significant. On investigation we
found that, with the current method, each tuple is copied to the shared
queue and the receiver is notified individually. Since this copying happens
in the shared queue, it incurs a lot of locking and latching overhead.

So, in this POC patch I tried to copy all the tuples into a local queue
first, thus avoiding those locks and latches. Once the local queue is filled
to its capacity, the tuples are transferred to the shared queue, and only
after all of them have been transferred is the receiver notified.
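
In rough pseudocode, the worker-side send path looks like this (a minimal
sketch with illustrative names; this is not the exact code in the attached
patch, and oversized tuples plus the final flush at end-of-scan are not
handled here):

#include "postgres.h"
#include "access/htup_details.h"
#include "storage/shm_mq.h"

/* Illustrative staging buffer; names do not match the attached patch. */
#define LOCAL_QUEUE_SIZE (64 * 1024)

typedef struct LocalTupleQueue
{
	Size		used;					/* bytes currently staged */
	char		buf[LOCAL_QUEUE_SIZE];	/* staged tuple data */
} LocalTupleQueue;

static void
local_queue_put(LocalTupleQueue *lq, shm_mq_handle *mqh, MinimalTuple tuple)
{
	Size		len = tuple->t_len;

	/*
	 * If the staged batch would overflow, push it to the shared queue.  In
	 * this simplified sketch the whole batch goes out as one shm_mq message;
	 * the actual patch copies the staged bytes into the shared ring while
	 * preserving per-tuple message framing, and notifies the receiver only
	 * once per batch.
	 */
	if (lq->used + len > LOCAL_QUEUE_SIZE)
	{
		(void) shm_mq_send(mqh, lq->used, lq->buf, false);
		lq->used = 0;
	}

	/* Stage the tuple locally -- no spinlock, no SetLatch. */
	memcpy(lq->buf + lq->used, tuple, len);
	lq->used += len;
}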

With this patch I could see significant improvement in performance for
simple queries,

head:
explain analyse select * from t where i < 30000000;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------
Gather (cost=0.00..83225.55 rows=29676454 width=19) (actual
time=1.379..35871.235 rows=29999999 loops=1)
Workers Planned: 64
Workers Launched: 64
-> Parallel Seq Scan on t (cost=0.00..83225.55 rows=463695 width=19)
(actual time=0.125..1415.521 rows=461538 loops=65)
Filter: (i < 30000000)
Rows Removed by Filter: 1076923
Planning time: 0.180 ms
Execution time: 38503.478 ms
(8 rows)

patch:
explain analyse select * from t where i < 30000000;
QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------
Gather (cost=0.00..83225.55 rows=29676454 width=19) (actual
time=0.980..24499.427 rows=29999999 loops=1)
Workers Planned: 64
Workers Launched: 64
-> Parallel Seq Scan on t (cost=0.00..83225.55 rows=463695 width=19)
(actual time=0.088..968.406 rows=461538 loops=65)
Filter: (i < 30000000)
Rows Removed by Filter: 1076923
Planning time: 0.158 ms
Execution time: 27331.849 ms
(8 rows)

head:
explain analyse select * from t where i < 40000000;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------
Gather (cost=0.00..83225.55 rows=39501511 width=19) (actual
time=0.890..38438.753 rows=39999999 loops=1)
Workers Planned: 64
Workers Launched: 64
-> Parallel Seq Scan on t (cost=0.00..83225.55 rows=617211 width=19)
(actual time=0.074..1235.180 rows=615385 loops=65)
Filter: (i < 40000000)
Rows Removed by Filter: 923077
Planning time: 0.113 ms
Execution time: 41609.855 ms
(8 rows)

patch:
explain analyse select * from t where i < 40000000;
QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------
Gather (cost=0.00..83225.55 rows=39501511 width=19) (actual
time=1.085..31806.671 rows=39999999 loops=1)
Workers Planned: 64
Workers Launched: 64
-> Parallel Seq Scan on t (cost=0.00..83225.55 rows=617211 width=19)
(actual time=0.083..954.342 rows=615385 loops=65)
Filter: (i < 40000000)
Rows Removed by Filter: 923077
Planning time: 0.151 ms
Execution time: 35341.429 ms
(8 rows)

head:
explain analyse select * from t where i < 45000000;
QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------
Gather (cost=0.00..102756.80 rows=44584013 width=19) (actual
time=0.563..49156.252 rows=44999999 loops=1)
Workers Planned: 32
Workers Launched: 32
-> Parallel Seq Scan on t (cost=0.00..102756.80 rows=1393250 width=19)
(actual time=0.069..1905.436 rows=1363636 loops=33)
Filter: (i < 45000000)
Rows Removed by Filter: 1666667
Planning time: 0.106 ms
Execution time: 52722.476 ms
(8 rows)

patch:
explain analyse select * from t where i < 45000000;
QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------
Gather (cost=0.00..102756.80 rows=44584013 width=19) (actual
time=0.545..37501.200 rows=44999999 loops=1)
Workers Planned: 32
Workers Launched: 32
-> Parallel Seq Scan on t (cost=0.00..102756.80 rows=1393250 width=19)
(actual time=0.068..2165.430 rows=1363636 loops=33)
Filter: (i < 45000000)
Rows Removed by Filter: 1666667
Planning time: 0.087 ms
Execution time: 41458.969 ms
(8 rows)

The improvement in performance is greatest when the selectivity is around
20-30%, cases for which parallelism is currently not selected.

I am testing the performance impact of this on TPC-H queries; in the
meantime, I would appreciate some feedback on the design, etc.

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

Attachments:

faster_gather.patch (application/octet-stream, +359/-4)
#2 Robert Haas
robertmhaas@gmail.com
In reply to: Rafia Sabih (#1)
Re: [POC] Faster processing at Gather node

On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih
<rafia.sabih@enterprisedb.com> wrote:

While analysing the performance of TPC-H queries for the newly developed
parallel operators (parallel index scan, bitmap heap scan, etc.) we noticed
that the time taken by the Gather node is significant. On investigation we
found that, with the current method, each tuple is copied to the shared
queue and the receiver is notified individually. Since this copying happens
in the shared queue, it incurs a lot of locking and latching overhead.

So, in this POC patch I tried to copy all the tuples into a local queue
first, thus avoiding those locks and latches. Once the local queue is filled
to its capacity, the tuples are transferred to the shared queue, and only
after all of them have been transferred is the receiver notified.

What if, instead of doing this, we switched the shm_mq stuff to use atomics?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#3 Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#2)
Re: [POC] Faster processing at Gather node

On Fri, May 19, 2017 at 5:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih
<rafia.sabih@enterprisedb.com> wrote:

While analysing the performance of TPC-H queries for the newly developed
parallel operators (parallel index scan, bitmap heap scan, etc.) we noticed
that the time taken by the Gather node is significant. On investigation we
found that, with the current method, each tuple is copied to the shared
queue and the receiver is notified individually. Since this copying happens
in the shared queue, it incurs a lot of locking and latching overhead.

So, in this POC patch I tried to copy all the tuples into a local queue
first, thus avoiding those locks and latches. Once the local queue is filled
to its capacity, the tuples are transferred to the shared queue, and only
after all of them have been transferred is the receiver notified.

What if, instead of doing this, we switched the shm_mq stuff to use atomics?

That is one of the very first things we tried, but it didn't show any
improvement, probably because sending tuples one-by-one over shm_mq is not
cheap. Independently, we also tried to reduce the frequency of SetLatch
(used to notify the receiver), but that didn't improve the results either.
Now, one thing that could be tried is to use atomics in shm_mq and reduce
the notification frequency together, but I am not sure whether that would
give us better results than this idea. A couple of other ideas have been
tried to improve the speed of Gather, like avoiding the extra copy of the
tuple that we need to make before sending it
(tqueueReceiveSlot->ExecMaterializeSlot) and increasing the tuple queue
length, but none of those showed any noticeable improvement. I am aware of
all this because Dilip and I were involved off-list in brainstorming ideas
with Rafia to improve the speed of Gather. It might have been better to show
the results of the ideas that didn't work out, but I guess Rafia hasn't
shared those on the intuition that nobody would be interested in hearing
what didn't work.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#4 Alexander Kuzmenkov
a.kuzmenkov@postgrespro.ru
In reply to: Rafia Sabih (#1)
Re: [POC] Faster processing at Gather node

Hi Rafia,

I like the idea of reducing locking overhead by sending tuples in bulk.
The implementation could probably be simpler: you could extend the API
of shm_mq to decouple notifying the receiver from actually putting data
into the queue (i.e., make shm_mq_notify_receiver public and add a
variant of shm_mq_sendv that doesn't send the notification). From Amit's
letter I understand that you have already tried something along these
lines and the performance wasn't good. What was the bottleneck then? If
it's the locking around mq_bytes_read/written, it can be rewritten with
atomics. I think it would be great to try this approach because it
doesn't add much code, doesn't add any additional copying, and improves
shm_mq performance in general.
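
To illustrate the shape of the API I have in mind (a rough sketch only:
shm_mq_sendv_nonotify() is a made-up name, and neither it nor an exported
shm_mq_notify_receiver() exists today):

#include "postgres.h"
#include "access/htup_details.h"
#include "storage/shm_mq.h"

/* Hypothetical sender-side batching: several puts, one wakeup. */
static void
send_batch_then_notify(shm_mq_handle *mqh, shm_mq *mq,
					   MinimalTuple *tuples, int ntuples)
{
	int			i;

	for (i = 0; i < ntuples; i++)
	{
		shm_mq_iovec iov;

		iov.data = (const char *) tuples[i];
		iov.len = tuples[i]->t_len;

		/* Copies into the ring but skips the per-message SetLatch(). */
		shm_mq_sendv_nonotify(mqh, &iov, 1, true);
	}

	/* One notification for the whole batch. */
	shm_mq_notify_receiver(mq);
}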

--
Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#5 Amit Kapila
amit.kapila16@gmail.com
In reply to: Alexander Kuzmenkov (#4)
Re: [POC] Faster processing at Gather node

On Fri, Sep 8, 2017 at 11:07 PM, Alexander Kuzmenkov
<a.kuzmenkov@postgrespro.ru> wrote:

Hi Rafia,

I like the idea of reducing locking overhead by sending tuples in bulk. The
implementation could probably be simpler: you could extend the API of shm_mq
to decouple notifying the receiver from actually putting data into the queue
(i.e., make shm_mq_notify_receiver public and add a variant of shm_mq_sendv
that doesn't send the notification).

Rafia can comment on the details, but I would like to bring to your
notice that we need a kind of local buffer (queue) for gather merge
processing as well, where the data needs to be fetched from the queues in
order. So, there is always a chance that some of the workers have
filled their queues while waiting for the master to extract the data.
I think the patch posted by Rafia on the nearby thread [1] addresses
both problems in one patch.

[1]: /messages/by-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS=fHiBJmbSOF74aBQ@mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#6 Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Amit Kapila (#5)
Re: [POC] Faster processing at Gather node

On Sat, Sep 9, 2017 at 8:14 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Sep 8, 2017 at 11:07 PM, Alexander Kuzmenkov
<a.kuzmenkov@postgrespro.ru> wrote:

Hi Rafia,

I like the idea of reducing locking overhead by sending tuples in bulk. The
implementation could probably be simpler: you could extend the API of shm_mq
to decouple notifying the receiver from actually putting data into the queue
(i.e., make shm_mq_notify_receiver public and add a variant of shm_mq_sendv
that doesn't send the notification).

Rafia can comment on the details, but I would like to bring to your
notice that we need a kind of local buffer (queue) for gather merge
processing as well, where the data needs to be fetched from the queues in
order. So, there is always a chance that some of the workers have
filled their queues while waiting for the master to extract the data.
I think the patch posted by Rafia on the nearby thread [1] addresses
both problems in one patch.

[1] - /messages/by-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS=fHiBJmbSOF74aBQ@mail.gmail.com

Thanks, Alexander, for your interest in this work. As rightly pointed
out by Amit, when experimenting with this patch we found that there
are cases where the master is busy and unable to read tuples from the
shared queue, and the workers get stuck because they cannot process any
more tuples. While experimenting along these lines, I found that Q12 of
TPC-H shows a great performance improvement when the shared tuple queue
size is increased [1].
It was then that we realised that merging this with the idea of giving the
illusion of a larger tuple queue size by means of a local queue [1] could be
more beneficial. To explain precisely what merging the two ideas means: we
now write tuples into the local queue once the shared queue is full, and as
soon as enough tuples have accumulated in the local queue we copy them from
the local to the shared queue in one memcpy call. This gives good
performance improvements in quite a few cases.

I'd be glad if you could have a look at this patch and enlighten me
with your suggestions. :-)

[1]: /messages/by-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS=fHiBJmbSOF74aBQ@mail.gmail.com

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/


#7 Alexander Kuzmenkov
a.kuzmenkov@postgrespro.ru
In reply to: Rafia Sabih (#6)
Re: [POC] Faster processing at Gather node

Thanks Rafia, Amit, now I understand the ideas behind the patch better.
I'll see if I can look at the new one.

--

Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#8 Andres Freund
andres@anarazel.de
In reply to: Rafia Sabih (#1)
Re: [POC] Faster processing at Gather node

Hi Rafia,

On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:

head:
explain analyse select * from t where i < 30000000;
QUERY PLAN

Could you share how exactly you generated the data? Just so others can
compare a bit better with your results?

Regards,

Andres


#9 Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Andres Freund (#8)
Re: [POC] Faster processing at Gather node

On Tue, Oct 17, 2017 at 3:22 AM, Andres Freund <andres@anarazel.de> wrote:

Hi Rafia,

On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:

head:
explain analyse select * from t where i < 30000000;
QUERY PLAN

Could you share how exactly you generated the data? Just so others can
compare a bit better with your results?

Sure. I used generate_series(1, 10000000);
Please find the attached script for the detailed steps.

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

Attachments:

large_tbl_gen.sql (application/octet-stream)
#10 Andres Freund
andres@anarazel.de
In reply to: Rafia Sabih (#1)
Re: [POC] Faster processing at Gather node

Hi Everyone,

On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:

While analysing the performance of TPC-H queries for the newly developed
parallel operators (parallel index scan, bitmap heap scan, etc.) we noticed
that the time taken by the Gather node is significant. On investigation we
found that, with the current method, each tuple is copied to the shared
queue and the receiver is notified individually. Since this copying happens
in the shared queue, it incurs a lot of locking and latching overhead.

So, in this POC patch I tried to copy all the tuples into a local queue
first, thus avoiding those locks and latches. Once the local queue is filled
to its capacity, the tuples are transferred to the shared queue, and only
after all of them have been transferred is the receiver notified.

With this patch I could see significant improvement in performance for
simple queries,

I've spent some time looking into this, and I'm not quite convinced this
is the right approach. Let me start by describing where I see the
current performance problems around gather stemming from.

The observations here are made using
select * from t where i < 30000000 offset 29999999 - 1;
with Rafia's data. That avoids slowdowns on the leader due to too many
rows printed out. Sometimes I'll also use
SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
on tpch data to show the effects on wider tables.

The precise query doesn't really matter, the observations here are more
general, I hope.

1) nodeGather.c re-projects every row from workers. As far as I can tell
the targetlist is now always exactly the same as the one coming from the
worker. Projection capability was added in 8538a6307049590 (without
checking whether it's needed, afaict), but I think it was in turn often
made obsolete by 992b5ba30dcafdc222341505b072a6b009b248a7. My
measurement shows that removing the projection yields quite massive
speedups in queries like yours, which is not too surprising.

I suspect this just needs a tlist_matches_tupdesc check + an if
around ExecProject(). And a test: right now the tests pass, even with
the projection commented out unconditionally, unless force_parallel_mode
is used.
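
Roughly like this in ExecGather()'s return path (just a sketch, not a
finished patch; tlist_matches_tupdesc() is currently static in execScan.c
and would need to be exposed or duplicated, and ExecInitGather() would have
to leave ps_ProjInfo NULL when the projection is a no-op):

	slot = gather_getnext(node);
	if (TupIsNull(slot))
		return NULL;

	/* Skip the per-row projection when the worker's tuples already match. */
	if (node->ps.ps_ProjInfo == NULL)
		return slot;

	econtext->ecxt_outertuple = slot;
	return ExecProject(node->ps.ps_ProjInfo);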

before commenting out nodeGather projection:

   rafia time: 8283.583
   rafia profile:
+   30.62%  postgres  postgres             [.] shm_mq_receive
+   18.49%  postgres  postgres             [.] s_lock
+   10.08%  postgres  postgres             [.] SetLatch
-    7.02%  postgres  postgres             [.] slot_deform_tuple
   - slot_deform_tuple
      - 88.01% slot_getsomeattrs
           ExecInterpExpr
           ExecGather
           ExecLimit
   lineitem time: 8448.468
   lineitem profile:
+   24.63%  postgres  postgres             [.] slot_deform_tuple
+   24.43%  postgres  postgres             [.] shm_mq_receive
+   17.36%  postgres  postgres             [.] ExecInterpExpr
+    7.41%  postgres  postgres             [.] s_lock
+    5.73%  postgres  postgres             [.] SetLatch
after:
   rafia time: 6660.224
   rafia profile:
+   36.77%  postgres  postgres              [.] shm_mq_receive
+   19.33%  postgres  postgres              [.] s_lock
+   13.14%  postgres  postgres              [.] SetLatch
+    9.22%  postgres  postgres              [.] AllocSetReset
+    4.27%  postgres  postgres              [.] ExecGather
+    2.79%  postgres  postgres              [.] AllocSetAlloc
   lineitem time: 4507.416
   lineitem profile:
+   34.81%  postgres  postgres            [.] shm_mq_receive
+   15.45%  postgres  postgres            [.] s_lock
+   13.38%  postgres  postgres            [.] SetLatch
+    9.87%  postgres  postgres            [.] AllocSetReset
+    5.82%  postgres  postgres            [.] ExecGather

as quite clearly visible, avoiding the projection yields some major
speedups.

The following analysis here has the projection removed.

2) The spinlocks on both the sending and receiving side are quite hot:

   rafia query leader:
+   36.16%  postgres  postgres            [.] shm_mq_receive
+   19.49%  postgres  postgres            [.] s_lock
+   13.24%  postgres  postgres            [.] SetLatch

The presence of s_lock shows us that we're clearly often contending
on spinlocks, given that it's the slow path for SpinLockAcquire(). In
shm_mq_receive the instruction profile shows:

│ SpinLockAcquire(&mq->mq_mutex);
│1 5ab: mov $0xa9b580,%ecx
│ mov $0x4a4,%edx
│ mov $0xa9b538,%esi
│ mov %r15,%rdi
│ → callq s_lock
│ ↑ jmpq 2a1
│ tas():
│1 5c7: mov $0x1,%eax
32.83 │ lock xchg %al,(%r15)
│ shm_mq_inc_bytes_read():
│ SpinLockAcquire(&mq->mq_mutex);
and
0.01 │ pop %r15
0.04 │ ← retq
│ nop
│ tas():
│1 338: mov $0x1,%eax
17.59 │ lock xchg %al,(%r15)
│ shm_mq_get_bytes_written():
│ SpinLockAcquire(&mq->mq_mutex);
0.05 │ test %al,%al
0.01 │ ↓ jne 448
│ v = mq->mq_bytes_written;

    rafia query worker:
+   33.00%  postgres  postgres            [.] shm_mq_send_bytes
+    9.90%  postgres  postgres            [.] s_lock
+    7.74%  postgres  postgres            [.] shm_mq_send
+    5.40%  postgres  postgres            [.] ExecInterpExpr
+    5.34%  postgres  postgres            [.] SetLatch

Again, we have strong indicators for a lot of spinlock
contention. The instruction profiles show the same;

shm_mq_send_bytes
│ shm_mq_inc_bytes_written(mq, MAXALIGN(sendnow));
│ and $0xfffffffffffffff8,%r15
│ tas():
0.08 │ mov %ebp,%eax
31.07 │ lock xchg %al,(%r14)
│ shm_mq_inc_bytes_written():
│ * Increment the number of bytes written.
│ */

and

│3 98: cmp %r13,%rbx
0.70 │ ↓ jae 430
│ tas():
0.12 │1 a1: mov %ebp,%eax
28.53 │ lock xchg %al,(%r14)
│ shm_mq_get_bytes_read():
│ SpinLockAcquire(&mq->mq_mutex);
│ test %al,%al
│ ↓ jne 298
│ v = mq->mq_bytes_read;

shm_mq_send:
│ tas():
50.73 │ lock xchg %al,0x0(%r13)
│ shm_mq_notify_receiver():
│ shm_mq_notify_receiver(volatile shm_mq *mq)
│ {
│ PGPROC *receiver;
│ bool detached;

My interpretation here is that it's not just the effect of the
locking causing the slowdown, but to a significant degree the effect
of the implied bus lock.

To test that theory, here are the timings for
1) spinlocks present
time: 6593.045
2) spinlock acquisition replaced by *full* memory barriers, which on x86 is a lock; addl $0,0(%%rsp)
time: 5159.306
3) spinlocks replaced by read/write barriers as appropriate.
time: 4687.610

By my understanding of shm_mq.c's logic, something like 3) ought to
be doable in a correct manner. In normal circumstances there should
only be one end modifying each of the protected variables. Doing that
instead of using full-blown atomics with locked instructions is very
likely to yield considerably better performance.
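
As a sketch of the direction (not shm_mq.c itself; detach handling,
wraparound and latches are omitted, and the struct/function names here are
only illustrative):

#include "postgres.h"
#include "port/atomics.h"

typedef struct spsc_counters
{
	pg_atomic_uint64 bytes_written;		/* only the sender stores this */
	pg_atomic_uint64 bytes_read;		/* only the receiver stores this */
	char		ring[8192];
} spsc_counters;

/* Sender: make the ring contents visible before advancing the counter. */
static void
sender_publish(spsc_counters *q, Size offset, const char *data, Size len)
{
	memcpy(&q->ring[offset], data, len);
	pg_write_barrier();
	pg_atomic_write_u64(&q->bytes_written,
						pg_atomic_read_u64(&q->bytes_written) + len);
}

/* Receiver: finish reading the ring before letting the sender reuse it. */
static void
receiver_consume(spsc_counters *q, Size offset, char *dst, Size len)
{
	memcpy(dst, &q->ring[offset], len);
	pg_read_barrier();
	pg_atomic_write_u64(&q->bytes_read,
						pg_atomic_read_u64(&q->bytes_read) + len);
}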

The top-level profile after 3 is:

   leader:
+   25.89%  postgres  postgres          [.] shm_mq_receive
+   24.78%  postgres  postgres          [.] SetLatch
+   14.63%  postgres  postgres          [.] AllocSetReset
+    7.31%  postgres  postgres          [.] ExecGather
   worker:
+   14.02%  postgres  postgres            [.] ExecInterpExpr
+   11.63%  postgres  postgres            [.] shm_mq_send_bytes
+   11.25%  postgres  postgres            [.] heap_getnext
+    6.78%  postgres  postgres            [.] SetLatch
+    6.38%  postgres  postgres            [.] slot_deform_tuple

still a lot of cycles in the queue code, but proportionally less.

4) Modulo computations in shm_mq are expensive:

│ shm_mq_send_bytes():
│ Size offset = mq->mq_bytes_written % (uint64) ringsize;
0.12 │1 70: xor %edx,%edx
│ Size sendnow = Min(available, ringsize - offset);
│ mov %r12,%rsi
│ Size offset = mq->mq_bytes_written % (uint64) ringsize;
43.75 │ div %r12
│ memcpy(&mq->mq_ring[mq->mq_ring_offset + offset],
7.23 │ movzbl 0x31(%r15),%eax

│ shm_mq_receive_bytes():
│ used = written - mq->mq_bytes_read;
1.17 │ sub %rax,%rcx
│ offset = mq->mq_bytes_read % (uint64) ringsize;
18.49 │ div %rbp
│ mov %rdx,%rdi

That we end up with a full-blown div makes sense - the compiler can't
know anything about ringsize, therefore it can't optimize it into a mask.
I think we should force the size of the ringbuffer to be a power of
two, and use a mask instead of a size for the buffer.
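
I.e. something like this (assuming the queue creation path rounds the ring
size up to a power of two):

/* Sketch: with a power-of-two ring size, the div becomes a single AND. */
static inline Size
ring_offset(uint64 bytes, Size ringsize)
{
	Assert((ringsize & (ringsize - 1)) == 0);	/* ringsize is 2^k */
	return (Size) (bytes & (ringsize - 1));		/* was: bytes % ringsize */
}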

5) There are a *lot* of latch interactions. The biggest issue actually is
the memory barrier implied by a SetLatch; waiting for the latch
barely shows up.

from 4) above:

   leader:
+   25.89%  postgres  postgres          [.] shm_mq_receive
+   24.78%  postgres  postgres          [.] SetLatch
+   14.63%  postgres  postgres          [.] AllocSetReset
+    7.31%  postgres  postgres          [.] ExecGather

│ 0000000000781b10 <SetLatch>:
│ SetLatch():
│ /*
│ * The memory barrier has to be placed here to ensure that any flag
│ * variables possibly changed by this process have been flushed to main
│ * memory, before we check/set is_set.
│ */
│ pg_memory_barrier();
77.43 │ lock addl $0x0,(%rsp)

│ /* Quick exit if already set */
│ if (latch->is_set)
0.12 │ mov (%rdi),%eax

Commenting out the memory barrier - which is NOT CORRECT - improves
timing:
before: 4675.626
after: 4125.587

The correct fix obviously is not to break latch correctness. I think
the big problem here is that we perform a SetLatch() for every read
from the queue.

I think we should
a) introduce a batch variant for receiving like:

extern shm_mq_result shm_mq_receivev(shm_mq_handle *mqh,
shm_mq_iovec *iov, int *iovcnt,
bool nowait)

which then only does the SetLatch() at the end. And maybe if it
wrapped around.

b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
queue whenever it's not empty when a tuple is ready.

I've not benchmarked this, but I'm pretty certain that the benefit
isn't just going to be the reduced cost of SetLatch() calls, but also
increased performance due to fewer context switches.
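
Hypothetical leader-side usage of a), only to show where the savings come
from (shm_mq_receivev() and handle_tuple() don't exist; the point is that
the sender's latch would be set once per drained batch rather than once per
tuple):

static void
drain_one_queue(shm_mq_handle *mqh)
{
	shm_mq_iovec iov[16];
	int			iovcnt = lengthof(iov);
	int			i;

	/* Proposed API: fill up to iovcnt messages, SetLatch() once. */
	if (shm_mq_receivev(mqh, iov, &iovcnt, true) != SHM_MQ_SUCCESS)
		return;

	for (i = 0; i < iovcnt; i++)
		handle_tuple(iov[i].data, iov[i].len);	/* illustrative */
}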

6) I've observed, using strace, debug outputs with timings, and top with
a short interval, that quite often only one backend has sufficient
work, while other backends are largely idle.

I think the problem here is that the "anti round robin" provisions from
bc7fcab5e36b9597857, while much better than the previous state, have
swung a bit too far in the other direction. Especially if we were
to introduce batching as I suggest in 5), but even without it, this
leads to back-and-forth on half-empty queues between the
gatherstate->nextreader backend and the leader.

I'm not 100% certain what the right fix here is.

One fairly drastic solution would be to move away from a
single-producer-single-consumer style per-worker queue to a global
queue. That'd avoid fairness issues between the individual workers,
at the price of potentially added contention. One disadvantage is that
such a combined queue approach is not easily applicable to gather
merge.

One less drastic approach would be to try to drain the queue
fully in one batch, and then move to the next queue. That'd avoid
triggering small wakeups for each individual tuple, as one
currently would get without the 'stickiness'.

It might also be a good idea to use a more directed form of wakeups,
e.g. using a per-worker latch + a wait event set, to avoid iterating
over all workers.

Unfortunately the patch's "local worker queue" concept seems, to me,
like it's not quite addressing the structural issues, instead opting to
address them by adding another layer of queuing. I suspect that if we
went for the above solutions there'd be only a very small benefit in
implementing such per-worker local queues.

Greetings,

Andres Freund


#11 Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#10)
Re: [POC] Faster processing at Gather node

Hi,

On 2017-10-17 14:39:57 -0700, Andres Freund wrote:

I've spent some time looking into this, and I'm not quite convinced this
is the right approach. Let me start by describing where I see the
current performance problems around gather stemming from.

One further approach to several of these issues could also be to change
things a bit more radically:

Instead of the current shm_mq + tqueue.c, have a drastically simpler
queue that just stores fixed-width dsa_pointers. Dealing with that
queue will be quite a bit faster. In that queue one would store
dsa.c-managed pointers to tuples.

One thing that makes that attractive is that it'd move a bunch of
copying out of the leader process and solely into the worker processes,
because the leader could just convert the dsa_pointer into a local
pointer and hand that up the execution tree.

We'd possibly need some halfway clever way to reuse dsa allocations, but
that doesn't seem impossible.
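
Very roughly, and only to illustrate the direction (all names made up;
full-queue handling, wraparound limits, and reuse of dsa allocations are
omitted):

#include "postgres.h"
#include "access/htup_details.h"
#include "port/atomics.h"
#include "utils/dsa.h"

#define PTRQ_SLOTS 1024

typedef struct ptr_queue
{
	pg_atomic_uint64 head;				/* next slot the worker fills */
	pg_atomic_uint64 tail;				/* next slot the leader reads */
	dsa_pointer slots[PTRQ_SLOTS];
} ptr_queue;

/* Worker: copy the tuple once, into DSA, and publish its pointer. */
static void
ptr_queue_put(ptr_queue *q, dsa_area *area, MinimalTuple tuple)
{
	uint64		head = pg_atomic_read_u64(&q->head);
	dsa_pointer p = dsa_allocate(area, tuple->t_len);

	memcpy(dsa_get_address(area, p), tuple, tuple->t_len);
	q->slots[head % PTRQ_SLOTS] = p;
	pg_write_barrier();					/* slot contents before head */
	pg_atomic_write_u64(&q->head, head + 1);
}

/* Leader: no copy -- just translate the pointer and hand it up the tree. */
static MinimalTuple
ptr_queue_get(ptr_queue *q, dsa_area *area)
{
	uint64		tail = pg_atomic_read_u64(&q->tail);

	if (tail == pg_atomic_read_u64(&q->head))
		return NULL;					/* queue is empty */
	pg_read_barrier();					/* head before slot contents */
	pg_atomic_write_u64(&q->tail, tail + 1);
	return (MinimalTuple) dsa_get_address(area, q->slots[tail % PTRQ_SLOTS]);
}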

Greetings,

Andres Freund


#12 Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#10)
Re: [POC] Faster processing at Gather node

On Tue, Oct 17, 2017 at 5:39 PM, Andres Freund <andres@anarazel.de> wrote:

The precise query doesn't really matter, the observations here are more
general, I hope.

1) nodeGather.c re-projects every row from workers. As far as I can tell
that's now always exactly the same targetlist as it's coming from the
worker. Projection capability was added in 8538a6307049590 (without
checking whether it's needed afaict), but I think it in turn often
obsoleted by 992b5ba30dcafdc222341505b072a6b009b248a7. My
measurement shows that removing the projection yields quite massive
speedups in queries like yours, which is not too surprising.

That seems like an easy and worthwhile optimization.

I suspect this just needs a tlist_matches_tupdesc check + an if
around ExecProject(). And a test, right now tests pass unless
force_parallel_mode is used even if just commenting out the
projection unconditionally.

So, for this to fail, we'd need a query that uses parallelism but
where the target list contains a parallel-restricted function. Also,
the function should really be such that we'll reliably get a failure
rather than only with some small probability. I'm not quite sure how
to put together such a test case, but there's probably some way.

2) The spinlocks on both the sending and receiving side are quite hot:

rafia query leader:
+   36.16%  postgres  postgres            [.] shm_mq_receive
+   19.49%  postgres  postgres            [.] s_lock
+   13.24%  postgres  postgres            [.] SetLatch

To test that theory, here are the timings for
1) spinlocks present
time: 6593.045
2) spinlock acquisition replaced by *full* memory barriers, which on x86 is a lock; addl $0,0(%%rsp)
time: 5159.306
3) spinlocks replaced by read/write barriers as appropriate.
time: 4687.610

By my understanding of shm_mq.c's logic, something like 3) ought to
be doable in a correct manner. In normal circumstances there should
only be one end modifying each of the protected variables. Doing that
instead of using full-blown atomics with locked instructions is very
likely to yield considerably better performance.

I think the sticking point here will be the handling of the
mq_detached flag. I feel like I fixed a bug at some point where this
had to be checked or set under the lock at the same time we were
checking or setting mq_bytes_read and/or mq_bytes_written, but I don't
remember the details. I like the idea, though.

Not sure what happened to #3 on your list... you went from #2 to #4.

4) Modulo computations in shm_mq are expensive:

That we end up with a full-blown div makes sense - the compiler can't
know anything about ringsize, therefore it can't optimize it into a mask.
I think we should force the size of the ringbuffer to be a power of
two, and use a mask instead of a size for the buffer.

This seems like it would require some redesign. Right now we let the
caller choose any size they want and subtract off our header size to
find the actual queue size. We can waste up to MAXALIGN-1 bytes, but
that's not much. This would waste up to half the bytes provided,
which is probably not cool.

5) There are a *lot* of latch interactions. The biggest issue actually is
the memory barrier implied by a SetLatch; waiting for the latch
barely shows up.

Commenting out the memory barrier - which is NOT CORRECT - improves
timing:
before: 4675.626
after: 4125.587

The correct fix obviously is not to break latch correctness. I think
the big problem here is that we perform a SetLatch() for every read
from the queue.

I think it's a little bit of an overstatement to say that commenting
out the memory barrier is not correct. When we added that code, we
removed this comment:

- * Presently, when using a shared latch for interprocess signalling, the
- * flag variable(s) set by senders and inspected by the wait loop must
- * be protected by spinlocks or LWLocks, else it is possible to miss events
- * on machines with weak memory ordering (such as PPC). This restriction
- * will be lifted in future by inserting suitable memory barriers into
- * SetLatch and ResetLatch.

It seems to me that in any case where the data is protected by an
LWLock, the barriers we've added to SetLatch and ResetLatch are
redundant. I'm not sure if that's entirely true in the spinlock case,
because S_UNLOCK() is only documented to have release semantics, so
maybe the load of latch->is_set could be speculated before the lock is
dropped. But I do wonder if we're just multiplying barriers endlessly
instead of trying to think of ways to minimize them (e.g. have a
variant of SpinLockRelease that acts as a full barrier instead of a
release barrier, and then avoid a second barrier when checking the
latch state).

All that having been said, a batch variant for reading tuples in bulk
might make sense. I think when I originally wrote this code I was
thinking that one process might be filling the queue while another
process was draining it, and that it might therefore be important to
free up space as early as possible. But maybe that's not a very good
intuition.

b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
queue whenever it's not empty when a tuple is ready.

Batching them with what? I hate to postpone sending tuples we've got;
that sounds nice in the case where we're sending tons of tuples at
high speed, but there might be other cases where it makes the leader
wait.

6) I've observed, using strace, debug outputs with timings, and top with
a short interval, that quite often only one backend has sufficient
work, while other backends are largely idle.

Doesn't that just mean we're bad at choosing how many workers to use?
If one worker can't outrun the leader, it's better to have the other
workers sleep and keep the one lucky worker on CPU than to keep
context switching. Or so I would assume.

One fairly drastic solution would be to move away from a
single-producer-single-consumer style per-worker queue to a global
queue. That'd avoid fairness issues between the individual workers,
at the price of potentially added contention. One disadvantage is that
such a combined queue approach is not easily applicable to gather
merge.

It might also lead to more contention.

One less drastic approach would be to try to drain the queue
fully in one batch, and then move to the next queue. That'd avoid
triggering small wakeups for each individual tuple, as one
currently would get without the 'stickiness'.

I don't know if that is better but it seems worth a try.

It might also be a good idea to use a more directed form of wakeups,
e.g. using a per-worker latch + a wait event set, to avoid iterating
over all workers.

I don't follow.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#13 Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#12)
Re: [POC] Faster processing at Gather node

Hi,

On 2017-10-18 15:46:39 -0400, Robert Haas wrote:

2) The spinlocks on both the sending and receiving side are quite hot:

rafia query leader:
+   36.16%  postgres  postgres            [.] shm_mq_receive
+   19.49%  postgres  postgres            [.] s_lock
+   13.24%  postgres  postgres            [.] SetLatch

To test that theory, here are the timings for
1) spinlocks present
time: 6593.045
2) spinlock acquisition replaced by *full* memory barriers, which on x86 is a lock; addl $0,0(%%rsp)
time: 5159.306
3) spinlocks replaced by read/write barriers as appropriate.
time: 4687.610

By my understanding of shm_mq.c's logic, something like 3) ought to
be doable in a correct manner. In normal circumstances there should
only be one end modifying each of the protected variables. Doing that
instead of using full-blown atomics with locked instructions is very
likely to yield considerably better performance.

I think the sticking point here will be the handling of the
mq_detached flag. I feel like I fixed a bug at some point where this
had to be checked or set under the lock at the same time we were
checking or setting mq_bytes_read and/or mq_bytes_written, but I don't
remember the details. I like the idea, though.

Hm. I'm a bit confused/surprised by that. What'd be the worst that can
happen if we don't immediately detect that the other side is detached?
At least if we only do so in the non-blocking paths, the worst that
could seemingly happen is that we read/write at most one superfluous
queue entry, rather than reporting an error? Or is the concern that
detaching might be detected *too early*, before reading the last entry
from a queue?

Not sure what happened to #3 on your list... you went from #2 to #4.

Threes are boring.

4) Modulo computations in shm_mq are expensive:

That we end up with a full-blown div makes sense - the compiler can't
know anything about ringsize, therefore it can't optimize it into a mask.
I think we should force the size of the ringbuffer to be a power of
two, and use a mask instead of a size for the buffer.

This seems like it would require some redesign. Right now we let the
caller choose any size they want and subtract off our header size to
find the actual queue size. We can waste up to MAXALIGN-1 bytes, but
that's not much. This would waste up to half the bytes provided,
which is probably not cool.

Yea, I think it'd require a shm_mq_estimate_size(Size queuesize) that
returns the next power-of-two queue size + overhead.
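
Something along these lines (a sketch; it would live in shm_mq.c, where the
struct and its flexible-array mq_ring member are visible):

static Size
shm_mq_estimate_size(Size queuesize)
{
	Size		ringsize = 1;

	/* round the requested ring size up to the next power of two */
	while (ringsize < queuesize)
		ringsize *= 2;

	return MAXALIGN(offsetof(shm_mq, mq_ring)) + ringsize;
}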

5) There are a *lot* of latch interactions. The biggest issue actually is
the memory barrier implied by a SetLatch; waiting for the latch
barely shows up.

Commenting out the memory barrier - which is NOT CORRECT - improves
timing:
before: 4675.626
after: 4125.587

The correct fix obviously is not to break latch correctness. I think
the big problem here is that we perform a SetLatch() for every read
from the queue.

I think it's a little bit of an overstatement to say that commenting
out the memory barrier is not correct. When we added that code, we
removed this comment:

- * Presently, when using a shared latch for interprocess signalling, the
- * flag variable(s) set by senders and inspected by the wait loop must
- * be protected by spinlocks or LWLocks, else it is possible to miss events
- * on machines with weak memory ordering (such as PPC). This restriction
- * will be lifted in future by inserting suitable memory barriers into
- * SetLatch and ResetLatch.

It seems to me that in any case where the data is protected by an
LWLock, the barriers we've added to SetLatch and ResetLatch are
redundant. I'm not sure if that's entirely true in the spinlock case,
because S_UNLOCK() is only documented to have release semantics, so
maybe the load of latch->is_set could be speculated before the lock is
dropped. But I do wonder if we're just multiplying barriers endlessly
instead of trying to think of ways to minimize them (e.g. have a
variant of SpinLockRelease that acts as a full barrier instead of a
release barrier, and then avoid a second barrier when checking the
latch state).

I'm not convinced by this. IMO the multiplying largely comes from
superfluous actions, like the per-entry SetLatch calls here, rather than
per-batch ones.

After all I'd benchmarked this *after* an experimental conversion of
shm_mq to not use spinlocks - so there's actually no external barrier
providing these guarantees that could be combined with SetLatch()'s
barrier.

Presumably part of the cost here comes from the fact that the barriers
actually do have an influence over the ordering.

All that having been said, a batch variant for reading tuples in bulk
might make sense. I think when I originally wrote this code I was
thinking that one process might be filling the queue while another
process was draining it, and that it might therefore be important to
free up space as early as possible. But maybe that's not a very good
intuition.

I think that's a sensible goal, but I don't think that has to mean that
the draining has to happen after every entry. If you'd e.g. have a
shm_mq_receivev() with 16 iovecs, that'd commonly be only part of a
single tqueue mq (unless your tuples are > 4k). I'll note that afaict
shm_mq_sendv() already batches its SetLatch() calls.

I think one important thing a batched drain can avoid is that a worker
awakes to just put one new tuple into the queue and then sleep
again. That's kinda expensive.

b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
queue whenever it's not empty when a tuple is ready.

Batching them with what? I hate to postpone sending tuples we've got;
that sounds nice in the case where we're sending tons of tuples at
high speed, but there might be other cases where it makes the leader
wait.

Yea, that'd need some smarts. How about doing something like batching up
locally as long as the queue contains less than one average sized batch?

6) I've observed, using strace, debug outputs with timings, and top with
a short interval, that quite often only one backend has sufficient
work, while other backends are largely idle.

Doesn't that just mean we're bad at choosing how many workers to use?
If one worker can't outrun the leader, it's better to have the other
workers sleep and keep the one lucky worker on CPU than to keep
context switching. Or so I would assume.

No, I don't think that's necessarily true. If that worker's queue is full
every time, then yes. But I think a common scenario is that the
"current" worker only has a small portion of the queue filled. Draining
that immediately just leads to increased cacheline bouncing.

I'd not previously thought about this, but won't staying sticky to the
current worker potentially cause pins on individual tuples to be held for
a long time by workers that are not making any progress?

It might also be a good idea to use a more directed form of wakeups,
e.g. using a per-worker latch + a wait event set, to avoid iterating
over all workers.

I don't follow.

Well, right now we're busily checking each worker's queue. That's fine
with a handful of workers, but starts to become not that cheap pretty
soon afterwards. In the surely common case where the workers are the
bottleneck (because that's when parallelism is worthwhile), we'll check
each worker's queue as soon as one of them has returned a single tuple.
It wouldn't be a stupid idea to have a per-worker latch and wait for the
latches of all workers at once, and then specifically drain the queues of
the workers that WaitEventSetWait(nevents = nworkers) signalled as ready.
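
Sketched below (error handling, ResetLatch of the worker latches, the
leader's own process latch, and reusing the set across calls are all
omitted; per_worker_latch[] and drain_worker_queue() are assumed names, not
existing code):

#include "postgres.h"
#include "pgstat.h"
#include "storage/latch.h"

static void
wait_and_drain(Latch **per_worker_latch, int nworkers)
{
	WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, nworkers);
	WaitEvent  *events = palloc(nworkers * sizeof(WaitEvent));
	int			i;
	int			nready;

	for (i = 0; i < nworkers; i++)
		AddWaitEventToSet(set, WL_LATCH_SET, PGINVALID_SOCKET,
						  per_worker_latch[i], (void *) (intptr_t) i);

	/* Sleep until at least one worker has signalled its latch. */
	nready = WaitEventSetWait(set, -1, events, nworkers,
							  WAIT_EVENT_EXECUTE_GATHER);

	/* Only look at the queues whose workers actually woke us. */
	for (i = 0; i < nready; i++)
		drain_worker_queue((int) (intptr_t) events[i].user_data);

	FreeWaitEventSet(set);
	pfree(events);
}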

Greetings,

Andres Freund


#14 Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#13)
Re: [POC] Faster processing at Gather node

On Wed, Oct 18, 2017 at 4:30 PM, Andres Freund <andres@anarazel.de> wrote:

Hm. I'm a bit confused/surprised by that. What'd be the worst that can
happen if we don't immediately detect that the other side is detached?
At least if we only do so in the non-blocking paths, the worst that
could seemingly happen is that we read/write at most one superfluous
queue entry, rather than reporting an error? Or is the concern that
detaching might be detected *too early*, before reading the last entry
from a queue?

Detaching too early is definitely a problem. A worker is allowed to
start up, write all of its results into the queue, and then detach
without waiting for the leader to read those results. (Reading
messages that weren't really written would be a problem too, of
course.)

I'm not convinced by this. Imo the multiplying largely comes from
superflous actions, like the per-entry SetLatch calls here, rather than
per batch.

After all I'd benchmarked this *after* an experimental conversion of
shm_mq to not use spinlocks - so there's actually no external barrier
providing these guarantees that could be combined with SetLatch()'s
barrier.

OK.

All that having been said, a batch variant for reading tuples in bulk
might make sense. I think when I originally wrote this code I was
thinking that one process might be filling the queue while another
process was draining it, and that it might therefore be important to
free up space as early as possible. But maybe that's not a very good
intuition.

I think that's a sensible goal, but I don't think that has to mean that
the draining has to happen after every entry. If you'd e.g. have a
shm_mq_receivev() with 16 iovecs, that'd commonly be only part of a
single tqueue mq (unless your tuples are > 4k). I'll note that afaict
shm_mq_sendv() already batches its SetLatch() calls.

But that's a little different -- shm_mq_sendv() sends one message
collected from multiple places. There's no more reason for it to wake
up the receiver before the whole message is written than there would
be for shm_mq_send(); it's the same problem.

I think one important thing a batched drain can avoid is that a worker
awakes to just put one new tuple into the queue and then sleep
again. That's kinda expensive.

Yes. Or - part of a tuple, which is worse.

b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
queue whenever it's not empty when a tuple is ready.

Batching them with what? I hate to postpone sending tuples we've got;
that sounds nice in the case where we're sending tons of tuples at
high speed, but there might be other cases where it makes the leader
wait.

Yea, that'd need some smarts. How about doing something like batching up
locally as long as the queue contains less than one average sized batch?

I don't follow.

No, I don't think that's necessarily true. If that worker's queue is full
every time, then yes. But I think a common scenario is that the
"current" worker only has a small portion of the queue filled. Draining
that immediately just leads to increased cacheline bouncing.

Hmm, OK.

I'd not previously thought about this, but won't staying sticky to the
current worker potentially cause pins on individual tuples be held for a
potentially long time by workers not making any progress?

Yes.

It might also be a good idea to use a more directed form of wakeups,
e.g. using a per-worker latch + a wait event set, to avoid iterating
over all workers.

I don't follow.

Well, right now we're busily checking each worker's queue. That's fine
with a handful of workers, but starts to become not that cheap pretty
soon afterwards. In the surely common case where the workers are the
bottleneck (because that's when parallelism is worthwhile), we'll check
each worker's queue once one of them returned a single tuple. It'd not
be a stupid idea to have a per-worker latch and wait for the latches of
all workers at once. Then targetedly drain the queues of the workers
that WaitEventSetWait(nevents = nworkers) signalled as ready.

Hmm, interesting. But we can't completely ignore the process latch
either, since among other things a worker erroring out and propagating
the error to the leader relies on that.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#15 Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#10)
Re: [POC] Faster processing at Gather node

On Wed, Oct 18, 2017 at 3:09 AM, Andres Freund <andres@anarazel.de> wrote:

Hi Everyone,

On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:

While analysing the performance of TPC-H queries for the newly developed
parallel operators (parallel index scan, bitmap heap scan, etc.) we noticed
that the time taken by the Gather node is significant. On investigation we
found that, with the current method, each tuple is copied to the shared
queue and the receiver is notified individually. Since this copying happens
in the shared queue, it incurs a lot of locking and latching overhead.

So, in this POC patch I tried to copy all the tuples into a local queue
first, thus avoiding those locks and latches. Once the local queue is filled
to its capacity, the tuples are transferred to the shared queue, and only
after all of them have been transferred is the receiver notified.

With this patch I could see significant improvement in performance for
simple queries,

I've spent some time looking into this, and I'm not quite convinced this
is the right approach.

As per my understanding, the patch in this thread is dead (not
required) after the patch posted by Rafia in the thread "Effect of
changing the value for PARALLEL_TUPLE_QUEUE_SIZE" [1]. There seem to
be two related problems in this area: the first is shm queue communication
overhead, and the second is that workers start to stall when the shm queue
gets full. We can observe the first problem in simple queries that use
Gather, and the second in gather-merge kinds of scenarios. We are trying to
resolve both problems with the patch posted in the other thread. I
think there is some similarity with the patch posted on this thread,
but it is different. I have mentioned something similar upthread as
well.

Let me start by describing where I see the
current performance problems around gather stemming from.

The observations here are made using
select * from t where i < 30000000 offset 29999999 - 1;
with Rafia's data. That avoids slowdowns on the leader due to too many
rows printed out. Sometimes I'll also use
SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
on tpch data to show the effects on wider tables.

The precise query doesn't really matter, the observations here are more
general, I hope.

1) nodeGather.c re-projects every row from workers. As far as I can tell
that's now always exactly the same targetlist as it's coming from the
worker. Projection capability was added in 8538a6307049590 (without
checking whether it's needed afaict), but I think it in turn often
obsoleted by 992b5ba30dcafdc222341505b072a6b009b248a7. My
measurement shows that removing the projection yields quite massive
speedups in queries like yours, which is not too surprising.

I suspect this just needs a tlist_matches_tupdesc check + an if
around ExecProject(). And a test, right now tests pass unless
force_parallel_mode is used even if just commenting out the
projection unconditionally.

+1. I think we should do something to avoid this.

Commenting out the memory barrier - which is NOT CORRECT - improves
timing:
before: 4675.626
after: 4125.587

The correct fix obviously is not to break latch correctness. I think
the big problem here is that we perform a SetLatch() for every read
from the queue.

I think we should
a) introduce a batch variant for receiving like:

extern shm_mq_result shm_mq_receivev(shm_mq_handle *mqh,
shm_mq_iovec *iov, int *iovcnt,
bool nowait)

which then only does the SetLatch() at the end. And maybe if it
wrapped around.

b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
queue whenever it's not empty when a tuple is ready.

I think the patch posted in the other thread has tried to achieve such
batching by using local queues; maybe we should try some other way.

Unfortunately the patch's "local worker queue" concept seems, to me,
like it's not quite addressing the structural issues, instead opting to
address them by adding another layer of queuing.

That is done to batch the tuples while sending them. Sure, we
can do some of the other things as well, but I think the main
advantage comes from batching the tuples in a smart way while sending
them, and once that is done we might not need many of the other
optimizations.

[1]: /messages/by-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS=fHiBJmbSOF74aBQ@mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#16 Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#12)
Re: [POC] Faster processing at Gather node

On Thu, Oct 19, 2017 at 1:16 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Oct 17, 2017 at 5:39 PM, Andres Freund <andres@anarazel.de> wrote:

b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
queue whenever it's not empty when a tuple is ready.

Batching them with what? I hate to postpone sending tuples we've got;
that sounds nice in the case where we're sending tons of tuples at
high speed, but there might be other cases where it makes the leader
wait.

I think Rafia's latest patch on the thread [1] does something
similar: tuples are sent as long as there is space in the shared
memory queue, and then we turn to batching the tuples using local queues.

[1]: /messages/by-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS=fHiBJmbSOF74aBQ@mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#17 Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#10)
Re: [POC] Faster processing at Gather node

On Wed, Oct 18, 2017 at 3:09 AM, Andres Freund <andres@anarazel.de> wrote:

2) The spinlocks on both the sending and receiving side are quite hot:

rafia query leader:
+   36.16%  postgres  postgres            [.] shm_mq_receive
+   19.49%  postgres  postgres            [.] s_lock
+   13.24%  postgres  postgres            [.] SetLatch

Here's a patch which, as per an off-list discussion between Andres,
Amit, and myself, removes the use of the spinlock for most
send/receive operations in favor of memory barriers and the atomics
support for 8-byte reads and writes. I tested with a pgbench -i -s
300 database with pgbench_accounts_pkey dropped and
max_parallel_workers_per_gather boosted to 10. I used this query:

select aid, count(*) from pgbench_accounts group by 1 having count(*) > 1;

which produces this plan:

Finalize GroupAggregate (cost=1235865.51..5569468.75 rows=10000000 width=12)
Group Key: aid
Filter: (count(*) > 1)
-> Gather Merge (cost=1235865.51..4969468.75 rows=30000000 width=12)
Workers Planned: 6
-> Partial GroupAggregate (cost=1234865.42..1322365.42
rows=5000000 width=12)
Group Key: aid
-> Sort (cost=1234865.42..1247365.42 rows=5000000 width=4)
Sort Key: aid
-> Parallel Seq Scan on pgbench_accounts
(cost=0.00..541804.00 rows=5000000 width=4)
(10 rows)

On hydra (PPC), these changes didn't help much. Timings:

master: 29605.582, 29753.417, 30160.485
patch: 28218.396, 27986.951, 26465.584

That's about a 5-6% improvement. On my MacBook, though, the
improvement was quite a bit more:

master: 21436.745, 20978.355, 19918.617
patch: 15896.573, 15880.652, 15967.176

Median-to-median, that's about a 24% improvement.

Any reviews appreciated.

Thanks,

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

shm-mq-less-spinlocks-v1.2.patch (application/octet-stream, +116/-121)
#18 Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#17)
Re: [POC] Faster processing at Gather node

Hi,

On 2017-11-04 16:38:31 +0530, Robert Haas wrote:

On hydra (PPC), these changes didn't help much. Timings:

master: 29605.582, 29753.417, 30160.485
patch: 28218.396, 27986.951, 26465.584

That's about a 5-6% improvement. On my MacBook, though, the
improvement was quite a bit more:

Hm. I wonder why that is. Random unverified theories (this plane doesn't
have power supplies for us mere mortals in coach, therefore I'm not
going to run benchmarks):

- Due to the lower per-core performance the leader backend is so
bottlenecked that there's just not a whole lot of
contention. Therefore removing the lock doesn't help much. That might
be a bit different if the redundant projection is removed.
- IO performance on hydra is notoriously bad so there might just not be
enough data available for workers to process rows fast enough to cause
contention.

master: 21436.745, 20978.355, 19918.617
patch: 15896.573, 15880.652, 15967.176

Median-to-median, that's about a 24% improvement.

Neat!

- * mq_detached can be set by either the sender or the receiver, so the mutex
- * must be held to read or write it.  Memory barriers could be used here as
- * well, if needed.
+ * mq_bytes_read and mq_bytes_written are not protected by the mutex.  Instead,
+ * they are written atomically using 8 byte loads and stores.  Memory barriers
+ * must be carefully used to synchronize reads and writes of these values with
+ * reads and writes of the actual data in mq_ring.

Maybe mention that there's a fallback for ancient platforms?

@@ -900,15 +921,12 @@ shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
}
else if (available == 0)
{
-			shm_mq_result res;
-
-			/* Let the receiver know that we need them to read some data. */
-			res = shm_mq_notify_receiver(mq);
-			if (res != SHM_MQ_SUCCESS)
-			{
-				*bytes_written = sent;
-				return res;
-			}
+			/*
+			 * Since mq->mqh_counterparty_attached is known to be true at this
+			 * point, mq_receiver has been set, and it can't change once set.
+			 * Therefore, we can read it without acquiring the spinlock.
+			 */
+			SetLatch(&mq->mq_receiver->procLatch);

Might not hurt to assert mqh_counterparty_attached, just for slightly
easier debugging.

@@ -983,19 +1009,27 @@ shm_mq_receive_bytes(shm_mq *mq, Size bytes_needed, bool nowait,
for (;;)
{
Size		offset;
-		bool		detached;
+		uint64		read;
/* Get bytes written, so we can compute what's available to read. */
-		written = shm_mq_get_bytes_written(mq, &detached);
-		used = written - mq->mq_bytes_read;
+		written = pg_atomic_read_u64(&mq->mq_bytes_written);
+		read = pg_atomic_read_u64(&mq->mq_bytes_read);

Theoretically we don't actually need to re-read this from shared memory;
we could just keep the information in local memory too, right?
Doubtful, however, that it's important enough to bother, given that we have
to move the cacheline for `mq_bytes_written` anyway and will later also
dirty it to *update* `mq_bytes_read`. Similarly on the write side.
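
The sort of local caching being alluded to might look roughly like this,
building on the spsc_queue sketch upthread (hypothetical names, not from the
patch): the sender keeps its last-observed copy of the receiver's counter and
only re-reads shared memory when that possibly-stale value no longer shows
enough room.

typedef struct
{
    spsc_queue *queue;
    uint64_t    cached_bytes_read;   /* receiver's counter as last observed */
} sender_handle;

static size_t
sender_available(sender_handle *h, uint64_t written, size_t wanted)
{
    size_t used = (size_t) (written - h->cached_bytes_read);

    if (RING_SIZE - used < wanted)
    {
        /* Cached value is stale or genuinely insufficient: refresh it. */
        h->cached_bytes_read =
            atomic_load_explicit(&h->queue->bytes_read, memory_order_acquire);
        used = (size_t) (written - h->cached_bytes_read);
    }
    return RING_SIZE - used;
}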

-/*
* Increment the number of bytes read.
*/
static void
@@ -1157,63 +1164,51 @@ shm_mq_inc_bytes_read(volatile shm_mq *mq, Size n)
{
PGPROC *sender;

-	SpinLockAcquire(&mq->mq_mutex);
-	mq->mq_bytes_read += n;
+	/*
+	 * Separate prior reads of mq_ring from the increment of mq_bytes_read
+	 * which follows.  Pairs with the full barrier in shm_mq_send_bytes().
+	 * We only need a read barrier here because the increment of mq_bytes_read
+	 * is actually a read followed by a dependent write.
+	 */
+	pg_read_barrier();
+
+	/*
+	 * There's no need to use pg_atomic_fetch_add_u64 here, because nobody
+	 * else can be changing this value.  This method avoids taking the bus
+	 * lock unnecessarily.
+	 */

s/the bus lock/a bus lock/? Might also be worth rephrasing away from
bus locks - there's a lot of different ways atomics are implemented.
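
In terms of the same spsc_queue sketch, the receiver-side update the hunk
above is getting at is just an unlocked read followed by an ordered store --
no locked read-modify-write instruction needed, because only the receiver
ever advances this counter (again illustrative code, not the patch):

static void
queue_inc_bytes_read(spsc_queue *q, size_t n)
{
    uint64_t v = atomic_load_explicit(&q->bytes_read, memory_order_relaxed);

    /*
     * Release ordering keeps the preceding reads of the ring data from being
     * reordered past the point where the sender is told the space is free,
     * playing the role of the barrier described in the patch's comment.
     */
    atomic_store_explicit(&q->bytes_read, v + n, memory_order_release);
}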

/*
- * Get the number of bytes written. The sender need not use this to access
- * the count of bytes written, but the receiver must.
- */
-static uint64
-shm_mq_get_bytes_written(volatile shm_mq *mq, bool *detached)
-{
- uint64 v;
-
- SpinLockAcquire(&mq->mq_mutex);
- v = mq->mq_bytes_written;
- *detached = mq->mq_detached;
- SpinLockRelease(&mq->mq_mutex);
-
- return v;
-}

You reference this function in a comment elsewhere:

+	/*
+	 * Separate prior reads of mq_ring from the write of mq_bytes_written
+	 * which we're about to do.  Pairs with shm_mq_get_bytes_written's read
+	 * barrier.
+	 */
+	pg_write_barrier();

Greetings,

Andres Freund


#19Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#18)
Re: [POC] Faster processing at Gather node

On Sat, Nov 4, 2017 at 5:55 PM, Andres Freund <andres@anarazel.de> wrote:

master: 21436.745, 20978.355, 19918.617
patch: 15896.573, 15880.652, 15967.176

Median-to-median, that's about a 24% improvement.

Neat!

With the attached stack of 4 patches, I get: 10811.768 ms, 10743.424
ms, 10632.006 ms, about a 49% improvement median-to-median. Haven't
tried it on hydra or any other test cases yet.

skip-gather-project-v1.patch does what it says on the tin. I still
don't have a test case for this, and I didn't find that it helped very
much, but it would probably help more in a test case with more
columns, and you said this looked like a big bottleneck in your
testing, so here you go.

shm-mq-less-spinlocks-v2.patch is updated from the version I posted
before based on your review comments. I don't think it's really
necessary to mention that the 8-byte atomics have fallbacks here;
whatever needs to be said about that should be said in some central
place that talks about atomics, not in each user individually. I
agree that there might be some further speedups possible by caching
some things in local memory, but I haven't experimented with that.

shm-mq-reduce-receiver-latch-set-v1.patch causes the receiver to only
consume input from the shared queue when the amount of unconsumed
input exceeds 1/4 of the queue size. This caused a large performance
improvement in my testing because it causes the number of times the
latch gets set to drop dramatically. I experimented a bit with
thresholds of 1/8 and 1/2 before settling on 1/4; 1/4 seems to be
enough to capture most of the benefit.
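
For anyone who wants the shape of that change without reading the diff, the
idea is roughly the following, using the spsc_queue sketch from earlier and
invented names (the real patch of course works on shm_mq):

static void
receiver_note_consumed(spsc_queue *q, size_t *pending, size_t n)
{
    *pending += n;

    /* Only publish progress -- and wake the sender -- in queue-quarter chunks. */
    if (*pending >= RING_SIZE / 4)
    {
        uint64_t v = atomic_load_explicit(&q->bytes_read, memory_order_relaxed);

        atomic_store_explicit(&q->bytes_read, v + *pending, memory_order_release);
        *pending = 0;
        /* SetLatch() on the sender's latch would go here in the real code. */
    }
}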

remove-memory-leak-protection-v1.patch removes the memory leak
protection that Tom installed upon discovering that the original
version of tqueue.c leaked memory like crazy. I think that it
shouldn't do that any more, courtesy of
6b65a7fe62e129d5c2b85cd74d6a91d8f7564608. Assuming that's correct, we
can avoid a whole lot of tuple copying in Gather Merge and a much more
modest amount of overhead in Gather. Since my test case exercised
Gather Merge, this bought ~400 ms or so.

Even with all of these patches applied, there's clearly still room for
more optimization, but MacOS's "sample" profiler seems to show that
the bottlenecks are largely shifting elsewhere:

Sort by top of stack, same collapsed (when >= 5):
slot_getattr (in postgres) 706
slot_deform_tuple (in postgres) 560
ExecAgg (in postgres) 378
ExecInterpExpr (in postgres) 372
AllocSetAlloc (in postgres) 319
_platform_memmove$VARIANT$Haswell (in libsystem_platform.dylib) 314
read (in libsystem_kernel.dylib) 303
heap_compare_slots (in postgres) 296
combine_aggregates (in postgres) 273
shm_mq_receive_bytes (in postgres) 272

I'm probably not super-excited about spending too much more time
trying to make the _platform_memmove time (only 20% or so of which
seems to be due to the shm_mq stuff) or the shm_mq_receive_bytes time
go down until, say, somebody JIT's slot_getattr and slot_deform_tuple.
:-)

One thing that might be worth doing is hammering on the AllocSetAlloc
time. I think that's largely caused by allocating space for heap
tuples and then freeing them and allocating space for new heap tuples.
Gather/Gather Merge are guilty of that, but I think there may be other
places in the executor with the same issue. Maybe we could have
fixed-size buffers for small tuples that just get reused and only
palloc for large tuples (cf. SLAB_SLOT_SIZE).
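
A minimal sketch of that kind of reuse buffer, under the assumption that one
buffer per reader suffices and with invented names (malloc/free standing in
for palloc/pfree), might be:

#include <stdlib.h>
#include <string.h>

#define TUPLE_BUFFER_SIZE 1024          /* cf. SLAB_SLOT_SIZE */

typedef struct
{
    char   fixed[TUPLE_BUFFER_SIZE];    /* reused for every small tuple */
    char  *large;                       /* grown on demand for big tuples */
    size_t large_size;                  /* zero-initialize before first use */
} tuple_buffer;

static void *
tuple_buffer_store(tuple_buffer *buf, const void *tuple, size_t len)
{
    if (len <= TUPLE_BUFFER_SIZE)
        return memcpy(buf->fixed, tuple, len);

    if (len > buf->large_size)
    {
        free(buf->large);
        buf->large = malloc(len);
        buf->large_size = (buf->large != NULL) ? len : 0;
    }
    return (buf->large != NULL) ? memcpy(buf->large, tuple, len) : NULL;
}

Small tuples then never touch the allocator at all; only the occasional
oversized tuple pays for an allocation.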

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

skip-gather-project-v1.patch (application/octet-stream, +110 -87)
shm-mq-less-spinlocks-v2.patch (application/octet-stream, +116 -122)
shm-mq-reduce-receiver-latch-set-v1.patch (application/octet-stream, +43 -27)
remove-memory-leak-protection-v1.patch (application/octet-stream, +5 -22)
#20Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#19)
Re: [POC] Faster processing at Gather node

On 2017-11-05 01:05:59 +0100, Robert Haas wrote:

skip-gather-project-v1.patch does what it says on the tin. I still
don't have a test case for this, and I didn't find that it helped very
much, but it would probably help more in a test case with more
columns, and you said this looked like a big bottleneck in your
testing, so here you go.

The query where that showed a big benefit was

SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;

(i.e., a not very selective filter, and then just throwing the results away).

It still shows quite massive benefits:

before:
set parallel_setup_cost=0;set parallel_tuple_cost=0;set min_parallel_table_scan_size=0;set max_parallel_workers_per_gather=8;
tpch_5[17938][1]=# explain analyze SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ QUERY PLAN
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ Limit (cost=635802.67..635802.69 rows=1 width=127) (actual time=8675.097..8675.097 rows=0 loops=1)
│ -> Gather (cost=0.00..635802.67 rows=27003243 width=127) (actual time=0.289..7904.849 rows=26989780 loops=1)
│ Workers Planned: 8
│ Workers Launched: 7
│ -> Parallel Seq Scan on lineitem (cost=0.00..635802.67 rows=3375405 width=127) (actual time=0.124..528.667 rows=3373722 loops=8)
│ Filter: (l_suppkey > 5012)
│ Rows Removed by Filter: 376252
│ Planning time: 0.098 ms
│ Execution time: 8676.125 ms
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
(9 rows)
after:
tpch_5[19754][1]=# EXPLAIN ANALYZE SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ QUERY PLAN
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ Limit (cost=635802.67..635802.69 rows=1 width=127) (actual time=5984.916..5984.916 rows=0 loops=1)
│ -> Gather (cost=0.00..635802.67 rows=27003243 width=127) (actual time=0.214..5123.238 rows=26989780 loops=1)
│ Workers Planned: 8
│ Workers Launched: 7
│ -> Parallel Seq Scan on lineitem (cost=0.00..635802.67 rows=3375405 width=127) (actual time=0.025..649.887 rows=3373722 loops=8)
│ Filter: (l_suppkey > 5012)
│ Rows Removed by Filter: 376252
│ Planning time: 0.076 ms
│ Execution time: 5986.171 ms
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
(9 rows)

so there clearly is still benefit (this is scale 100, but that shouldn't
make much of a difference).

Did not review the code.

shm-mq-reduce-receiver-latch-set-v1.patch causes the receiver to only
consume input from the shared queue when the amount of unconsumed
input exceeds 1/4 of the queue size. This caused a large performance
improvement in my testing because it causes the number of times the
latch gets set to drop dramatically. I experimented a bit with
thresholds of 1/8 and 1/2 before settling on 1/4; 1/4 seems to be
enough to capture most of the benefit.

Hm. Is the consuming itself the relevant part, or notifying the sender about it? I
suspect most of the benefit can be captured by updating bytes read (and
similarly on the other side w/ bytes written), but not setting the latch
unless thresholds are reached. The advantage of updating the value,
even without notifying the other side, is that in the common case that
the other side gets around to checking the queue without having blocked,
it'll see the updated value. If that works, that'd address the issue
that we might wait unnecessarily in a number of common cases.
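
In terms of the sketch above, that variant would publish the counter on every
call and make only the latch conditional -- again just an illustration of the
suggestion, not code from any of the patches:

static void
receiver_note_consumed_eager(spsc_queue *q, size_t *since_notify, size_t n)
{
    uint64_t v = atomic_load_explicit(&q->bytes_read, memory_order_relaxed);

    /* Always make the progress visible, so a non-blocked sender sees it... */
    atomic_store_explicit(&q->bytes_read, v + n, memory_order_release);

    /* ...but only pay for waking the sender once enough has accumulated. */
    *since_notify += n;
    if (*since_notify >= RING_SIZE / 4)
    {
        *since_notify = 0;
        /* SetLatch() on the sender's latch would go here. */
    }
}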

Did not review the code.

remove-memory-leak-protection-v1.patch removes the memory leak
protection that Tom installed upon discovering that the original
version of tqueue.c leaked memory like crazy. I think that it
shouldn't do that any more, courtesy of
6b65a7fe62e129d5c2b85cd74d6a91d8f7564608. Assuming that's correct, we
can avoid a whole lot of tuple copying in Gather Merge and a much more
modest amount of overhead in Gather.

Yup, that conceptually makes sense.

Did not review the code.

Even with all of these patches applied, there's clearly still room for
more optimization, but MacOS's "sample" profiler seems to show that
the bottlenecks are largely shifting elsewhere:

Sort by top of stack, same collapsed (when >= 5):
slot_getattr (in postgres) 706
slot_deform_tuple (in postgres) 560
ExecAgg (in postgres) 378
ExecInterpExpr (in postgres) 372
AllocSetAlloc (in postgres) 319
_platform_memmove$VARIANT$Haswell (in libsystem_platform.dylib) 314
read (in libsystem_kernel.dylib) 303
heap_compare_slots (in postgres) 296
combine_aggregates (in postgres) 273
shm_mq_receive_bytes (in postgres) 272

Interesting.  Here it's
+    8.79%  postgres  postgres            [.] ExecAgg
+    6.52%  postgres  postgres            [.] slot_deform_tuple
+    5.65%  postgres  postgres            [.] slot_getattr
+    4.59%  postgres  postgres            [.] shm_mq_send_bytes
+    3.66%  postgres  postgres            [.] ExecInterpExpr
+    3.44%  postgres  postgres            [.] AllocSetAlloc
+    3.08%  postgres  postgres            [.] heap_fill_tuple
+    2.34%  postgres  postgres            [.] heap_getnext
+    2.25%  postgres  postgres            [.] finalize_aggregates
+    2.08%  postgres  libc-2.24.so        [.] __memmove_avx_unaligned_erms
+    2.05%  postgres  postgres            [.] heap_compare_slots
+    1.99%  postgres  postgres            [.] execTuplesMatch
+    1.83%  postgres  postgres            [.] ExecStoreTuple
+    1.83%  postgres  postgres            [.] shm_mq_receive
+    1.81%  postgres  postgres            [.] ExecScan

I'm probably not super-excited about spending too much more time
trying to make the _platform_memmove time (only 20% or so of which
seems to be due to the shm_mq stuff) or the shm_mq_receive_bytes time
go down until, say, somebody JIT's slot_getattr and slot_deform_tuple.
:-)

Hm, let's say somebody were working on something like that. In that case
the benefits for this precise plan wouldn't yet be that big because a
good chunk of slot_getattr calls come from execTuplesMatch(), which
doesn't really provide enough context to do JITing (when used for
hash aggs there is more context, so it does get JITed). Similarly, Gather
Merge's heap_compare_slots() doesn't provide such context.

It's about ~9% currently, largely due to the faster aggregate
invocation. But the big benefit here would be all the deforming and the
comparisons...

- Andres


#21Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#20)
#22Jim Van Fleet
vanfleet@us.ibm.com
In reply to: Rafia Sabih (#1)
#23Andres Freund
andres@anarazel.de
In reply to: Jim Van Fleet (#22)
#24Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#20)
#25Jim Van Fleet
vanfleet@us.ibm.com
In reply to: Rafia Sabih (#1)
#26Andres Freund
andres@anarazel.de
In reply to: Jim Van Fleet (#25)
#27Jim Van Fleet
vanfleet@us.ibm.com
In reply to: Rafia Sabih (#1)
#28Andres Freund
andres@anarazel.de
In reply to: Jim Van Fleet (#27)
#29Jim Van Fleet
vanfleet@us.ibm.com
In reply to: Rafia Sabih (#1)
#30Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#24)
#31Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#30)
#32Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#31)
#33Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#32)
#34Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#33)
#35Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#34)
#36Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#35)
#37Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#36)
#38Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Robert Haas (#37)
#39Robert Haas
robertmhaas@gmail.com
In reply to: Rafia Sabih (#38)
#40Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#39)
#41Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#39)
#42Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#41)
#43Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Robert Haas (#39)
#44Ants Aasma
ants.aasma@cybertec.at
In reply to: Robert Haas (#42)
#45Robert Haas
robertmhaas@gmail.com
In reply to: Ants Aasma (#44)
#46Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#37)
#47Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#46)
#48Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Andres Freund (#40)
#49Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#47)
#50Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#49)
#51Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#50)
#52Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#50)
#53Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#52)
#54Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#53)
#55Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Robert Haas (#54)
#56Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#54)
#57Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#56)
#58Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#57)
#59Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#58)
#60Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#59)
#61Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#60)
#62Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#61)
#63Tels
nospam-pg-abuse@bloodgate.com
In reply to: Robert Haas (#62)
#64Bruce Momjian
bruce@momjian.us
In reply to: Tels (#63)