Inefficient bytea escaping?
When preparing to transfer blob data from one database to another (8.0.5
to 8.1.4), I found some interesting numbers that made me suspect that
bytea dumping is less efficient than one would expect.
I have a test dataset of 2000 rows, each row containing a bytea column.
Total disk usage of the table (no indexes) is 138MB, total data size is
1.4GB (sum(length(bytea_col))). Data is stored on a RAID5 (Areca 128MB,
SATA, 4 disks), and was dumped to a RAID1 on the same controller.
When dumping the table with psql \copy (non-binary), the resulting file
would be 6.6GB of size, taking about 5.5 minutes. Using psql \copy WITH
BINARY (modified psql as posted to -patches), the time was cut down to
21-22 seconds (filesize 1.4GB as expected), which is near the physical
throughput of the target disk. If a server-based COPY to a file is used,
the same factor of 12 can be observed; CPU usage is up to 100% (single P4
3GHz, 2MB cache, HT disabled, 1GB main mem).
What's happening here?
Regards,
Andreas
Andreas Pflug <pgadmin@pse-consulting.de> writes:
When dumping the table with psql \copy (non-binary), the resulting file
would be 6.6GB of size, taking about 5.5 minutes. Using psql \copy WITH
BINARY (modified psql as posted to -patches), the time was cut down to
21-22 seconds (filesize 1.4GB as expected), which is near the physical
throughput of the target disk. If a server-based COPY to a file is used,
the same factor of 12 can be observed; CPU usage is up to 100% (single P4
3GHz, 2MB cache, HT disabled, 1GB main mem).
This is with an 8.0.x server, right?
Testing a similar case with CVS HEAD, I see about a 5x speed difference,
which is right in line with the difference in the physical amount of
data written. (I was testing a case where all the bytes were emitted as
'\nnn', so it's the worst case.) oprofile says the time is being spent
in CopyAttributeOutText() and fwrite(). So I don't think there's
anything to be optimized here, as far as bytea goes: its binary
representation is just inherently a lot smaller.
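Roughly, the expansion works like this (a simplified sketch, not the actual
byteaout/CopyAttributeOutText code, and the treatment of literal backslashes
is glossed over): byteaout emits a worst-case byte as \nnn, and the COPY text
format then escapes the backslash again, so one input byte costs five output
characters.

/*
 * Simplified sketch (not the backend code) of the worst-case text-format
 * expansion of bytea: \nnn from the type output function, plus COPY's own
 * escaping of the backslash, gives 5 characters per input byte.
 */
#include <stdio.h>

static void
copy_text_escape_bytea(const unsigned char *data, size_t len, FILE *out)
{
    size_t      i;

    for (i = 0; i < len; i++)
    {
        unsigned char ch = data[i];

        if (ch < 0x20 || ch > 0x7e || ch == '\\')
            fprintf(out, "\\\\%03o", (unsigned int) ch);    /* 5 chars out */
        else
            fputc(ch, out);
    }
}

int
main(void)
{
    const unsigned char sample[] = {0x00, 0x01, 0xff, 'A'};

    copy_text_escape_bytea(sample, sizeof(sample), stdout);
    putchar('\n');              /* prints \\000\\001\\377A */
    return 0;
}

That accounts for roughly the 5x difference seen here: 1000000 worst-case
bytes per row become 5000000 characters plus the row terminator.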
Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character. I'm not sure how
much per-call overhead there is in that function. We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...
regards, tom lane
Tom Lane wrote:
Andreas Pflug <pgadmin@pse-consulting.de> writes:
When dumping the table with psql \copy (non-binary), the resulting file
would be 6.6GB of size, taking about 5.5 minutes. Using psql \copy WITH
BINARY (modified psql as posted to -patches), the time was cut down to
21-22 seconds (filesize 1.4GB as expected), which is near the physical
throughput of the target disk. If a server-based COPY to a file is used,
the same factor of 12 can be observed; CPU usage is up to 100% (single P4
3GHz, 2MB cache, HT disabled, 1GB main mem).

This is with an 8.0.x server, right?
I've tested both 8.0.5 and 8.1.4, no difference observed.
Testing a similar case with CVS HEAD, I see about a 5x speed difference,
which is right in line with the difference in the physical amount of
data written.
That's what I would have expected; apparently the data is near the worst case.
(I was testing a case where all the bytes were emitted as
'\nnn', so it's the worst case.) oprofile says the time is being spent
in CopyAttributeOutText() and fwrite(). So I don't think there's
anything to be optimized here, as far as bytea goes: its binary
representation is just inherently a lot smaller.
Unfortunately, binary isn't a cure-all, since copying normal data
with the binary option might bloat it by a factor of two or so. I wish there
were a third option that works well for both kinds of data. That's not only a
question of dump file sizes, but also of network throughput (on-the-fly
compression in the wire protocol would be desirable for this).
Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character. I'm not sure how
much per-call overhead there is in that function. We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...
Hm, I'll see whether I can manage to check CVS HEAD too and see what's
happening, though it's not a production alternative.
Regards,
Andreas
Andreas Pflug <pgadmin@pse-consulting.de> writes:
Tom Lane wrote:
Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character. I'm not sure how
much per-call overhead there is in that function. We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...
Hm, I'll see whether I can manage to check CVS HEAD too and see what's
happening, though it's not a production alternative.
OK, make sure you get the copy.c version I just committed ...
regards, tom lane
Tom Lane wrote:
Andreas Pflug <pgadmin@pse-consulting.de> writes:
Tom Lane wrote:
Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character. I'm not sure how
much per-call overhead there is in that function. We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...

Hm, I'll see whether I can manage to check CVS HEAD too and see what's
happening, though it's not a production alternative.

OK, make sure you get the copy.c version I just committed ...
Here are the results, with the copy patch:
psql \copy 1.4 GB from table, binary:
8.0 8.1 8.2dev
36s 34s 36s
psql \copy 1.4 GB to table, binary:
8.0 8.1 8.2dev
106s 95s 98s
psql \copy 6.6 GB from table, std:
8.0 8.1 8.2dev
375s 362s 290s (second:283s)
psql \copy 6.6 GB to table, std:
8.0 8.1 8.2dev
511s 230s 238s
INSERT INTO foo SELECT * FROM bar
8.0 8.1 8.2dev
75s 75s 75s
So text COPY has obviously improved by about 20% now, but it's still far
from the expected throughput. The dump disk should be capable of 60MB/s,
limiting text COPY to about 110 seconds, but the load process is CPU-bound
at the moment.
For comparison purposes, I included the in-server copy benchmarks as
well (bytea STORAGE EXTENDED; EXTERNAL won't make a noticeable
difference). This still seems slower than expected to me, since the
table's on-disk footprint is relatively small (138MB).
Regards,
Andreas
Andreas Pflug <pgadmin@pse-consulting.de> writes:
Here are the results, with the copy patch:
psql \copy 1.4 GB from table, binary:
8.0 8.1 8.2dev
36s 34s 36s
psql \copy 6.6 GB from table, std:
8.0 8.1 8.2dev
375s 362s 290s (second:283s)
Hmph. There's something strange going on on your platform (what is it,
anyway?). Using CVS HEAD on Fedora Core 4 x86_64, I get:
bytea=# copy t to '/home/tgl/t.out';
COPY 1024
Time: 273325.666 ms
bytea=# copy binary t to '/home/tgl/t.outb';
COPY 1024
Time: 62113.355 ms
Seems \timing doesn't work on \copy (annoying), so
$ time psql -c "\\copy t to '/home/tgl/t.out2'" bytea
real 3m47.507s
user 0m3.700s
sys 0m36.406s
$ ls -l t.*
-rw-r--r-- 1 tgl tgl 5120001024 May 26 12:58 t.out
-rw-rw-r-- 1 tgl tgl 5120001024 May 26 13:14 t.out2
-rw-r--r-- 1 tgl tgl 1024006165 May 26 13:00 t.outb
$
This test case is 1024 rows each containing a 1000000-byte bytea, stored
EXTERNAL (no on-disk compression), all bytes chosen to need expansion to
\nnn form. So the ratio in runtimes is in keeping with the amount of
data sent. It's interesting (and surprising) that the runtime is
actually less for psql \copy than for server COPY. This is a dual Xeon
machine, maybe the frontend copy provides more scope to use both CPUs?
It would be interesting to see what's happening on your machine with
oprofile or equivalent.
I can't test psql binary \copy just yet, but will look at applying your
recent patch so that case can be checked.
regards, tom lane
Tom Lane wrote:
Andreas Pflug <pgadmin@pse-consulting.de> writes:
Here are the results, with the copy patch:
psql \copy 1.4 GB from table, binary:
8.0 8.1 8.2dev
36s 34s 36s

psql \copy 6.6 GB from table, std:
8.0 8.1 8.2dev
375s 362s 290s (second:283s)

Hmph. There's something strange going on on your platform (what is it,
anyway?).
Debian 2.6.26.
It's interesting (and surprising) that the runtime is
actually less for psql \copy than for server COPY. This is a dual Xeon
machine, maybe the frontend copy provides more scope to use both CPUs?
The dual CPU explanation sounds reasonable, but I found the same
tendency on a single 3GHz CPU (HT disabled).
Strange observation using top:
user >90%, sys <10%, idle+wait 0%, but only the postmaster consumes CPU,
showing 35%; the rest is negligible.
It would be interesting to see what's happening on your machine with
oprofile or equivalent.
I'll investigate further, trying to find the missing CPU time.
Regards,
Andreas
I wrote:
I can't test psql binary \copy just yet, but will look at applying your
recent patch so that case can be checked.
With patch applied:
$ time psql -c "\\copy t to '/home/tgl/t.out2'" bytea
real 3m46.057s
user 0m2.724s
sys 0m36.118s
$ time psql -c "\\copy t to '/home/tgl/t.outb2' binary" bytea
real 1m5.222s
user 0m0.640s
sys 0m6.908s
$ ls -l t.*
-rw-rw-r-- 1 tgl tgl 5120001024 May 26 16:02 t.out2
-rw-rw-r-- 1 tgl tgl 1024006165 May 26 16:03 t.outb2
The binary time is just slightly more than what I got before for a
server COPY:
bytea=# copy t to '/home/tgl/t.out';
COPY 1024
Time: 273325.666 ms
bytea=# copy binary t to '/home/tgl/t.outb';
COPY 1024
Time: 62113.355 ms
So those numbers seem to hang together, and it's just the text case
that is not making too much sense. I'm off for a little visit with
oprofile...
regards, tom lane
I wrote:
I'm off for a little visit with oprofile...
It seems the answer is that fwrite() does have pretty significant
per-call overhead, at least on Fedora Core 4. The patch I did yesterday
still ended up making an fwrite() call every few characters when dealing
with bytea text output, because it'd effectively do two fwrite()s per
occurrence of '\' in the data being output. I've committed a further
hack that buffers a whole data row before calling fwrite(). Even though
this presumably is adding one extra level of data copying, it seems to
make things noticeably faster:
bytea=# copy t to '/home/tgl/t.out';
COPY 1024
Time: 209842.139 ms
as opposed to 268 seconds before. We were already applying the
line-at-a-time buffering strategy for frontend copies, so that
path didn't change much (it's about 226 seconds for the same case).
At this point, a copy-to-file is just marginally faster than a
frontend copy happening on the local machine, which speaks well
for the level of optimization of the Linux send/recv calls.
More importantly, I see consistent results for the text and
binary cases.
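The gist of the change is to accumulate the escaped text of a whole row in
memory and hand it to fwrite() in one call. A much-simplified sketch of the
idea (not the committed copy.c code; the fixed-size buffer and the missing
overflow handling are shortcuts):

/*
 * Row-buffering sketch: escape into an in-memory buffer and call fwrite()
 * once per data row instead of once per character or escape sequence.
 */
#include <stdio.h>
#include <string.h>

#define ROWBUF_SIZE 65536       /* sketch only: assume one row's text fits */

static char   rowbuf[ROWBUF_SIZE];
static size_t rowlen = 0;

static void
row_append(const char *data, size_t len)
{
    memcpy(rowbuf + rowlen, data, len); /* no overflow check in this sketch */
    rowlen += len;
}

static void
row_flush(FILE *out)
{
    fwrite(rowbuf, 1, rowlen, out);     /* one stdio call per row */
    rowlen = 0;
}

int
main(void)
{
    const unsigned char col[] = {0x00, 0x01, 0xff};
    char        escaped[16];
    size_t      i;

    for (i = 0; i < sizeof(col); i++)
    {
        int     n = snprintf(escaped, sizeof(escaped), "\\\\%03o",
                             (unsigned int) col[i]);

        row_append(escaped, (size_t) n);    /* buffered, not yet written */
    }
    row_append("\n", 1);
    row_flush(stdout);                  /* a single fwrite() for the row */
    return 0;
}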
Let me know what this does on your Debian machine ...
regards, tom lane
On 5/27/06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wrote:
I'm off for a little visit with oprofile...
It seems the answer is that fwrite() does have pretty significant
per-call overhead, at least on Fedora Core 4.
That may be because of the locking ritual all stdio functions
like to do, even without _REENTRANT.
If you want to use fwrite() as a string operator, then maybe it should be
replaced with fwrite_unlocked()?
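For what it's worth, fwrite_unlocked() has the same signature as fwrite()
but skips the per-call stream locking; it's a nonstandard (glibc) extension,
so the caller has to know the stream isn't shared between threads. A trivial
illustration (not backend code):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>

int
main(void)
{
    FILE       *f = fopen("/dev/null", "w");
    const char *s = "some escaped datum";
    long        i;

    if (f == NULL)
        return 1;

    for (i = 0; i < 1000000; i++)
        fwrite_unlocked(s, 1, strlen(s), f);    /* no lock/unlock per call */

    fclose(f);
    return 0;
}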
--
marko
Tom Lane wrote:
I wrote:
I'm off for a little visit with oprofile...
It seems the answer is that fwrite() does have pretty significant
per-call overhead, at least on Fedora Core 4. The patch I did yesterday
still ended up making an fwrite() call every few characters when dealing
with bytea text output, because it'd effectively do two fwrite()s per
occurrence of '\' in the data being output. I've committed a further
hack that buffers a whole data row before calling fwrite(). Even though
this presumably is adding one extra level of data copying, it seems to
make things noticeably faster:
(semi-OT) This recoding seems like a perfect preparation for a third
COPY format, compressed.
Let me know what this does on your Debian machine ...
It will take a while; I need to boot a different kernel because the
current one isn't oprofile-ready.
Regards,
Andreas
"Marko Kreen" <markokr@gmail.com> writes:
If you want to use fwrite() as a string operator, then maybe it should be
replaced with fwrite_unlocked()?
ISTM that in a single-threaded application such as the backend,
it should be libc's responsibility to avoid such overhead, not
ours.
regards, tom lane
On 5/27/06, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Marko Kreen" <markokr@gmail.com> writes:
If you want to use fwrite() as a string operator, then maybe it should be
replaced with fwrite_unlocked()?

ISTM that in a single-threaded application such as the backend,
it should be libc's responsibility to avoid such overhead, not
ours.
Obviously, except the glibc guys seem to be philosophically
opposed to this, so apps need to work around it.

AFAIK at least the *BSDs have got this right; I don't know
about others.
--
marko
On Sat, May 27, 2006 at 06:36:15PM +0300, Marko Kreen wrote:
ISTM that in a single-threaded application such as the backend,
it should be libc's responsibility to avoid such overhead, not
ours.

Obviously, except the glibc guys seem to be philosophically
opposed to this, so apps need to work around it.

AFAIK at least the *BSDs have got this right; I don't know
about others.
Given there is no way to know if you're running single-threaded or not,
I don't think glibc can take chances like that.
In any case, this isn't the issue here. Glibc doesn't do any locking
unless pthread is linked in. Of course, it takes a few cycles to
determine that, but I don't think that'd cause a major slowdown.
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
On 5/27/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
Given there is no way to know if you're running single-threaded or not,
I don't think glibc can take chances like that.
There's the CPP symbol _REENTRANT for that, and at run time
libc can detect a call to pthread_create [1].
In any case, this isn't the issue here. Glibc doesn't do any locking
unless pthread is linked in. Of course, it takes a few cycles to
determine that, but I don't think that'd cause a major slowdown.
You are contradicting your previous paragraph :)
Otherwise you are right - that's how a libc obviously should work, right?
http://marc.theaimsgroup.com/?l=glibc-alpha&m=100775741325472&w=2
http://marc.theaimsgroup.com/?l=glibc-alpha&m=112110641923178&w=2
I did a small test that does several fputc calls to /dev/null,
with various workarounds:
* lock.enabled is the standard app.
* lock.disabled calls __fsetlocking(FSETLOCKING_BYCALLER),
as suggested by Ulrich Drepper.
* lock.unlocked calls fputc_unlocked.
lock.enabled 48s
lock.disabled 28s
lock.unlocked 25s
I attached the test; you can measure it yourself.
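Roughly, the three variants do something like this (a reconstruction of the
idea, not the attached file; the iteration count and the DISABLE_LOCKING /
USE_UNLOCKED switches are made up for the sketch):

/*
 * Many fputc() calls to /dev/null, optionally with glibc's per-stream
 * locking handed to the caller via __fsetlocking(), or using
 * fputc_unlocked() directly.
 */
#define _GNU_SOURCE
#include <stdio.h>
#ifdef __GLIBC__
#include <stdio_ext.h>
#endif

#define NCALLS  100000000L      /* arbitrary; pick something that runs a while */

int
main(void)
{
    FILE       *f = fopen("/dev/null", "w");
    long        i;

    if (f == NULL)
        return 1;

#if defined(DISABLE_LOCKING) && defined(__GLIBC__)
    /* lock.disabled: the caller promises to do any locking itself */
    __fsetlocking(f, FSETLOCKING_BYCALLER);
#endif

    for (i = 0; i < NCALLS; i++)
    {
#ifdef USE_UNLOCKED
        fputc_unlocked('x', f); /* lock.unlocked */
#else
        fputc('x', f);          /* lock.enabled, or lock.disabled */
#endif
    }

    fclose(f);
    return 0;
}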
So I prepared a patch that calls __fsetlocking() in AllocateFile.
Andreas, Tom, could you measure whether it makes any difference?
--
marko
[1]: In the first thread I linked, there was a very clever optimisation
proposed using this function that would guarantee thread-safety even
without _REENTRANT.
Unfortunately, even if U. Drepper changes his mind someday
and fixes the locking for single-threaded apps, it would
very likely break binary compatibility with old apps,
so it won't happen in the near future.
Attachment: disable-glibc-locking.diff (text/plain)
Index: src/backend/storage/file/fd.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/storage/file/fd.c,v
retrieving revision 1.127
diff -u -r1.127 fd.c
--- src/backend/storage/file/fd.c 5 Mar 2006 15:58:37 -0000 1.127
+++ src/backend/storage/file/fd.c 27 May 2006 16:54:36 -0000
@@ -46,6 +46,10 @@
 #include <unistd.h>
 #include <fcntl.h>
 
+#ifdef __GLIBC__
+#include <stdio_ext.h>
+#endif
+
 #include "miscadmin.h"
 #include "access/xact.h"
 #include "storage/fd.h"
@@ -1258,6 +1262,11 @@
 	{
 		AllocateDesc *desc = &allocatedDescs[numAllocatedDescs];
 
+#ifdef __GLIBC__
+		/* disable glibc braindamaged locking */
+		__fsetlocking(file, FSETLOCKING_BYCALLER);
+#endif
+
 		desc->kind = AllocateDescFile;
 		desc->desc.file = file;
 		desc->create_subid = GetCurrentSubTransactionId();
On Sat, May 27, 2006 at 08:23:35PM +0300, Marko Kreen wrote:
On 5/27/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
Given there is no way to know if you're running single-threaded or not,
I don't think glibc can take chances like that.

There's the CPP symbol _REENTRANT for that, and at run time
libc can detect a call to pthread_create [1].
There are a number of ways to create threads, not all of which involve
pthread_create. I think my point is that you are not required to
declare _REENTRANT to get reentrant functions and there is no
_NOTREENTRANT symbol you can define.
I did a small test that does several fputc calls to /dev/null,
with various workarounds:
All your test proved was that it took 20 nanoseconds in each call to
fputc to determine no locking was required. I don't know how fast your
machine is, but that's probably just a few cycles. A better example
would be if there was actually some locking going on, i.e. add
-lpthread to the compile line. On my machine I get:
No -lpthread
lock.enabled 91s
lock.disabled 50s
lock.unlocked 36s
With -lpthread
lock.enabled 323s
lock.disabled 50s
lock.unlocked 36s
So yes, if you can guarantee no locking is required and tell glibc
that, you get optimal performance. But the *default* is to play it safe
and take a few extra cycles to check if locking is required at all.
Better than locking all the time, wouldn't you agree? Just because your
app didn't declare _REENTRANT doesn't mean any of the libraries it
uses didn't.
The crux of the matter is though, if you're calling something a million
times, you're better off trying to find an alternative anyway. There is
a certain amount of overhead to calling shared libraries and no amount
of optimisation of the library is going to save you that.
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
On 5/28/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
With -lpthread
lock.enabled 323s
lock.disabled 50s
lock.unlocked 36s
I forgot to test with -lpthread, my bad. Indeed by default
something less expensive than full locking is going on.
The crux of the matter is though, if you're calling something a million
times, you're better off trying to find an alternative anyway. There is
a certain amount of overhead to calling shared libraries and no amount
of optimisation of the library is going to save you that.
The crux of the matter was whether it's possible to use fwrite as an easy
string-combining mechanism, and the answer is no, because it's not
lightweight enough.
--
marko
Marko Kreen wrote:
On 5/28/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
With -lpthread
lock.enabled 323s
lock.disabled 50s
lock.unlocked 36s

I forgot to test with -lpthread, my bad. Indeed by default
something less expensive than full locking is going on.

The crux of the matter is though, if you're calling something a million
times, you're better off trying to find an alternative anyway. There is
a certain amount of overhead to calling shared libraries and no amount
of optimisation of the library is going to save you that.

The crux of the matter was whether it's possible to use fwrite as an easy
string-combining mechanism, and the answer is no, because it's not
lightweight enough.
IIRC the Windows port makes use of multi-threading to simulate signals, and it's likely that
some add-on modules will bring in libs like pthread. It would be less than ideal if PostgreSQL
were designed to take a significant performance hit when that happens. Especially if a viable
alternative exists.
Regards,
Thomas Hallgren
Marko Kreen wrote:
On 5/28/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
With -lpthread
lock.enabled 323s
lock.disabled 50s
lock.unlocked 36s

I forgot to test with -lpthread, my bad. Indeed by default
something less expensive than full locking is going on.

The crux of the matter is though, if you're calling something a million
times, you're better off trying to find an alternative anyway. There is
a certain amount of overhead to calling shared libraries and no amount
of optimisation of the library is going to save you that.

The crux of the matter was whether it's possible to use fwrite as an easy
string-combining mechanism, and the answer is no, because it's not
lightweight enough.
So your patch to src/backend/storage/file/fd.c should be discarded? OK.
--
Bruce Momjian http://candle.pha.pa.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
On 5/30/06, Bruce Momjian <pgman@candle.pha.pa.us> wrote:
The crux of the matter was whether it's possible to use fwrite as an easy
string-combining mechanism, and the answer is no, because it's not
lightweight enough.

So your patch to src/backend/storage/file/fd.c should be discarded? OK.
Yes, it was just for experimenting. As I understand it, Tom
already rewrote the critical path.
--
marko