Parallel COPY FROM execution

Started by Alexey Kondratov · almost 9 years ago · 8 messages · pgsql-hackers
#1Alexey Kondratov
a.kondratov@postgrespro.ru

Greetings pgsql-hackers,

I am a GSOC student this year, my initial proposal has been discussed
in the following thread
/messages/by-id/7179F2FD-49CE-4093-AE14-1B26C5DFB0DA@gmail.com

Patch with COPY FROM errors handling seems to be quite finished, so
I have started thinking about parallelism in COPY FROM, which is the next
point in my proposal.

In order to understand whether there are any expensive calls in COPY that
can be executed in parallel, I did some research. First, please find the
flame graph of the most expensive copy.c calls during 'COPY FROM file'
attached (copy_from.svg). It reveals that the inevitably serial operations,
such as CopyReadLine (<15%) and heap_multi_insert (~15%), together take less
than 50% of the total time, while the remaining operations, such as
heap_form_tuple and the multiple checks inside NextCopyFrom, can probably
be parallelized well.

Second, I have compared the execution time of 'COPY FROM a single large
file (~300 MB, 50000000 lines)' vs. 'COPY FROM four equal parts of the
original file executed in four parallel processes'. Though it is a
very rough test, it gives an overall estimate:

Serial:
real 0m56.571s
user 0m0.005s
sys 0m0.006s

Parallel (x4):
real 0m22.542s
user 0m0.015s
sys 0m0.018s

Thus, four parallel processes give a ~2.5x overall speedup (56.6 s vs.
22.5 s), i.e. roughly a 60% gain per doubling of the number of processes,
which is consistent with the initial estimation.

After several discussions I have two possible solutions on my mind:

1) Simple solution

Let us focus only on 'COPY FROM file'; then it is relatively easy to
implement: just give each worker the same file and its own offset.

++ Simple and more reliable; probably gives the largest possible
performance boost

- - Limited number of use cases. Though 'COPY FROM file' is a frequent case,
even when one uses it via psql \copy, the file is actually read on the
client side and streamed to the backend over stdin
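The per-worker offset logic in 1) could be sketched as follows (a standalone
illustration, not actual copy.c code; process_chunk and its conventions are
made up here). Each worker seeks to its byte offset and, unless it starts the
file, aligns itself to the next line boundary, so that every line is handled
by exactly one worker:

```c
#include <stdio.h>

/* Hypothetical worker routine for option 1: each worker opens the same
 * file and processes only the lines in its byte range [start, end).
 * A line belongs to the worker whose range contains its first byte. */
static long
process_chunk(const char *path, long start, long end)
{
    FILE *fp = fopen(path, "r");
    long  nlines = 0;
    int   c;

    if (fp == NULL)
        return -1;

    /* If 'start' falls mid-line, that line belongs to the previous
     * worker: check the byte before 'start' and skip forward to the
     * next newline when 'start' is not already a line boundary. */
    if (start > 0)
    {
        fseek(fp, start - 1, SEEK_SET);
        if (fgetc(fp) != '\n')
            while ((c = fgetc(fp)) != EOF && c != '\n')
                ;
    }

    /* Consume whole lines while the line still starts before 'end';
     * a real worker would do the per-line COPY work here. */
    while (ftell(fp) < end && (c = fgetc(fp)) != EOF)
    {
        while (c != '\n' && c != EOF)
            c = fgetc(fp);
        nlines++;
    }

    fclose(fp);
    return nlines;
}
```

With N workers one would call process_chunk(path, size*i/N, size*(i+1)/N)
for i = 0..N-1; the per-worker counts then sum to the total line count no
matter where the byte boundaries fall.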

2) True parallelism

Implement a pool of bg_workers and a simple shared buffer/queue. While the
main COPY process reads the input data and puts raw lines into the queue,
parallel bg_workers take lines from there and process them.

++ More general solution; supports various COPY FROM use cases

- - Much more sophisticated; probably a smaller performance boost
compared to 1)

I will be glad to hear any comments and criticism.

Alexey

Attachments:

copy_from.svg (image/svg+xml)
#2Pavel Stehule
pavel.stehule@gmail.com
In reply to: Alexey Kondratov (#1)
Re: Parallel COPY FROM execution

2017-06-30 14:23 GMT+02:00 Alex K <kondratov.aleksey@gmail.com>:

Thus, it results in a ~60% performance boost per each x2 multiplication of
parallel processes, which is consistent with the initial estimation.

An important use case is a big table with a lot of indexes. Did you test
a similar case?

Regards

Pavel

#3Alexey Kondratov
a.kondratov@postgrespro.ru
In reply to: Pavel Stehule (#2)
Re: Parallel COPY FROM execution

On Fri, Jun 30, 2017 at 3:35 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:

2017-06-30 14:23 GMT+02:00 Alex K <kondratov.aleksey@gmail.com>:

Thus, it results in a ~60% performance boost per each x2 multiplication of
parallel processes, which is consistent with the initial estimation.

the important use case is big table with lot of indexes. Did you test
similar case?

Not yet; I will try it, thank you for the suggestion. But how big are
'big table' and 'lot of indexes' in numbers, approximately?

Also, index update and constraint check performance is something I cannot
control during COPY execution, so I probably should not worry too much
about it. But of course, it is interesting how COPY performs in that case.

Alexey

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#4Pavel Stehule
pavel.stehule@gmail.com
In reply to: Alexey Kondratov (#3)
Re: Parallel COPY FROM execution

2017-06-30 15:42 GMT+02:00 Alex K <kondratov.aleksey@gmail.com>:


Not yet, I will try it, thank you for a suggestion. But how much is it
'big table' and 'lot of indexes' in numbers approximately?

the size is about 1/3 RAM size, 60 columns, 30 indexes

Regards

Pavel


#5Pavel Stehule
pavel.stehule@gmail.com
In reply to: Pavel Stehule (#4)
Re: Parallel COPY FROM execution

2017-06-30 15:45 GMT+02:00 Pavel Stehule <pavel.stehule@gmail.com>:


the size is about 1/3 RAM size, 60 columns, 30 indexes

Maybe some variants would be interesting (1/30 RAM, 1/20 RAM, 1/10 RAM,
1/3 RAM), both a) when the bottleneck is I/O and b) when the bottleneck
is CPU.

Pavel


#6Alexey Kondratov
a.kondratov@postgrespro.ru
In reply to: Pavel Stehule (#5)
Re: Parallel COPY FROM execution

Greetings pgsql-hackers,

I have completed the general infrastructure for parallel COPY FROM execution,
consisting of a main (master) process and multiple BGWorkers, each connected
to the master by a personal message queue (shm_mq).

The master process:
- Allocates dynamic shared memory holding the parallel state shared
between the BGWorkers and the master
- Attaches every worker to its personal message queue (shm_mq)
- Waits for worker initialization using a Latch
- Reads raw text lines using CopyReadLine and distributes them to the
shm_mq's round-robin, to balance the queue load
- On EOF, sends a zero-length message, upon receiving which the workers
safely shut down
- Waits for the workers to complete their jobs using a ConditionVariable

Each BGWorker:
- Signals the master via a Latch on initialization
- Receives raw text lines over its personal shm_mq and puts them into
the log (for now)
- Reinitializes its database connection using the same db_id and user_id
as the main process
- Signals the master via the ConditionVariable when its job is done

All parallel state modifications are done under LWLocks.
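Since shm_mq is available only inside the backend, the pattern above can be
illustrated standalone with plain POSIX pipes and fork (a rough analogy
only; run_copy_demo, the 3-worker pool, and the framing are invented for
this sketch). The master distributes raw lines round-robin, and a
zero-length message tells each worker to shut down, just as in the patch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define NWORKERS 3

/* Length-prefixed message; length 0 plays the role of the zero-length
 * shm_mq message that signals workers to shut down. */
static void
send_msg(int fd, const char *data, int len)
{
    write(fd, &len, sizeof(len));
    if (len > 0)
        write(fd, data, len);
}

/* Worker loop: receive lines until the zero-length message arrives,
 * then exit with the number of lines handled. */
static int
worker_loop(int fd)
{
    char buf[256];
    int  len, nlines = 0;

    while (read(fd, &len, sizeof(len)) == (ssize_t) sizeof(len) && len > 0)
    {
        read(fd, buf, len);
        nlines++;           /* a real worker would parse and insert here */
    }
    return nlines;
}

/* Master: fork a worker pool, round-robin 'nlines' raw lines across
 * the per-worker queues, send EOF, and collect the workers' counts. */
static int
run_copy_demo(int nlines)
{
    int   pipes[NWORKERS][2];
    pid_t pids[NWORKERS];
    int   i, total = 0;

    for (i = 0; i < NWORKERS; i++)
    {
        pipe(pipes[i]);
        pids[i] = fork();
        if (pids[i] == 0)
        {
            close(pipes[i][1]);
            _exit(worker_loop(pipes[i][0]));
        }
        close(pipes[i][0]);
    }

    for (i = 0; i < nlines; i++)    /* stands in for CopyReadLine */
    {
        char line[64];
        int  len = snprintf(line, sizeof(line), "raw line %d\n", i);

        send_msg(pipes[i % NWORKERS][1], line, len);
    }

    for (i = 0; i < NWORKERS; i++)  /* zero-length message: shut down */
    {
        send_msg(pipes[i][1], NULL, 0);
        close(pipes[i][1]);
    }

    for (i = 0; i < NWORKERS; i++)
    {
        int status;

        waitpid(pids[i], &status, 0);
        total += WEXITSTATUS(status);
    }
    return total;
}
```

Every line is handled exactly once by some worker, so run_copy_demo(n)
returns n. (The exit status is used here as a toy result channel and holds
only 8 bits, so keep the per-worker counts small.)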

You can find the actual code here:
https://github.com/ololobus/postgres/pull/2/files
(it is still in progress, so it has a lot of duplication and comments
that are to be deleted).

To go forward I have to overcome some obstacles:

- Currently, all copy.c methods are designed to work with one giant
structure, CopyState. It includes buffers, many initial parameters that
stay unchanged, and a few variables that vary during COPY FROM execution.
Since I need all these parameters, I have to obtain them somehow inside
each BGWorker process. I see two possible solutions here:

1) Perform the BeginCopyFrom initialization inside the master and put the
required parameters into shared memory. However, many of them are arrays
of variable size (e.g. partition_tupconv_maps, force_notnull_flags), so I
cannot put them into shmem inside one single struct. The best idea I have
is to put each parameter under its own shmem TOC key, which seems quite ugly.

2) Perform the BeginCopyFrom initialization inside each BGWorker. However,
it also opens the input file/pipe for reading, which is not necessary for
the workers and may cause some interference with the master, but I can
modify BeginCopyFrom.
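A third possibility for the variable-size fields in 1) is to flatten them
into a single blob with a small offset header, so that only one TOC key is
needed. A standalone sketch under invented names (FlatCopyParams is not a
real copy.c struct, and only one array is shown):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical flattened layout: a fixed-size header with counts and
 * byte offsets, followed by the variable-size arrays back to back.
 * The whole blob can then live under a single shmem TOC key. */
typedef struct FlatCopyParams
{
    int    natts;       /* length of the flags array */
    size_t flags_off;   /* byte offset of the flags array in the blob */
    /* ... offsets for other variable-size fields would go here ... */
} FlatCopyParams;

/* Master side: pack the header and arrays into one contiguous buffer. */
static char *
serialize_params(const char *force_notnull_flags, int natts, size_t *blob_len)
{
    size_t          len = sizeof(FlatCopyParams) + (size_t) natts;
    char           *blob = malloc(len);
    FlatCopyParams *hdr = (FlatCopyParams *) blob;

    hdr->natts = natts;
    hdr->flags_off = sizeof(FlatCopyParams);
    memcpy(blob + hdr->flags_off, force_notnull_flags, natts);
    *blob_len = len;
    return blob;
}

/* Worker side: recover a pointer into the blob; no copying needed. */
static const char *
deserialize_flags(const char *blob, int *natts)
{
    const FlatCopyParams *hdr = (const FlatCopyParams *) blob;

    *natts = hdr->natts;
    return blob + hdr->flags_off;
}
```

The worker reads the arrays in place, which keeps the shmem layout in one
struct definition instead of spreading it over many per-parameter keys.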

- I have used both a Latch and a ConditionVariable for the same purpose
(waiting until some signal occurs), and to me as an end user they behave
quite similarly. I looked into the condition_variable.c code, and it uses
a Latch and a SpinLock under the hood. So what are the differences and
the dis-/advantages of Latch vs. ConditionVariable?

I will be glad if someone can help me find an answer to this question;
any comments and remarks on the overall COPY FROM processing architecture
are also very welcome.

Alexey


#7Robert Haas
robertmhaas@gmail.com
In reply to: Alexey Kondratov (#6)
Re: Parallel COPY FROM execution

On Fri, Aug 11, 2017 at 9:55 AM, Alex K <kondratov.aleksey@gmail.com> wrote:

- I have used both a Latch and a ConditionVariable for the same purpose
(waiting until some signal occurs), and to me as an end user they behave
quite similarly. I looked into the condition_variable.c code, and it uses
a Latch and a SpinLock under the hood. So what are the differences and
the dis-/advantages of Latch vs. ConditionVariable?

A ConditionVariable lets you signal the processes that are waiting
without needing to know in advance exactly which processes those are.
If you use latches directly, you'll have to somehow keep track of
which processes need to be signaled.
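PostgreSQL's ConditionVariable is backend-internal, but the same
one-broadcast-wakes-all-waiters property can be shown with POSIX condition
variables (an analogy only; run_broadcast_demo and the waiter count are
made up). With raw latches, the broadcaster would instead need a list of
waiter processes to signal one by one:

```c
#include <pthread.h>

#define NWAITERS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int             go = 0;
static int             woken = 0;

/* Each waiter sleeps on the condition variable; it never registers
 * itself with the broadcaster in any way. */
static void *
waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!go)                 /* re-check the condition after waking */
        pthread_cond_wait(&cv, &lock);
    woken++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* The broadcaster wakes every current waiter with a single call,
 * without knowing who or how many they are. */
static int
run_broadcast_demo(void)
{
    pthread_t tids[NWAITERS];
    int       i;

    for (i = 0; i < NWAITERS; i++)
        pthread_create(&tids[i], NULL, waiter, NULL);

    pthread_mutex_lock(&lock);
    go = 1;
    pthread_cond_broadcast(&cv);  /* no per-waiter bookkeeping needed */
    pthread_mutex_unlock(&lock);

    for (i = 0; i < NWAITERS; i++)
        pthread_join(tids[i], NULL);
    return woken;
}
```

run_broadcast_demo() returns NWAITERS: one broadcast reaches all of them.
(Compile with -pthread.)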

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#8Daniel Gustafsson
daniel@yesql.se
In reply to: Robert Haas (#7)
Re: Parallel COPY FROM execution

On 11 Aug 2017, at 20:07, Robert Haas <robertmhaas@gmail.com> wrote:

A ConditionVariable lets you signal the processes that are waiting
without needing to know in advance exactly which processes those are.
If you use latches directly, you'll have to somehow keep track of
which processes need to be signaled.

Based on the discussion in this thread, and that a new version of the patch
hasn’t been submitted during the commitfest, I’m marking this Returned with
Feedback. Please re-submit a new version in an upcoming commitfest.

cheers ./daniel
