What exactly is our CRC algorithm?
Our CRC algorithm is a bit weird. It's supposedly CRC-32, with the same
polynomial as used in Ethernet et al, but it actually is not. The
comments refer to "Painless Guide to CRC Error Detection Algorithms" by
Ross N. Williams [1] (http://www.ross.net/crc/download/crc_v3.txt), but
I think it was implemented incorrectly.
As a test case, I used an input of a single zero byte. I calculated the
CRC using Postgres' INIT_CRC32+COMP_CRC32+FIN_CRC32, and compared with
various online CRC calculation tools and C snippets. The Postgres
algorithm produces the value 2D02EF72, while the correct one is
D202EF8D. The first and last byte are inverted. For longer inputs, the
values diverge, and I can't see any obvious pattern between the Postgres
and correct values.
There are many variants of CRC calculations, as explained in Ross's
guide. But ours doesn't seem to correspond to the reversed or reflected
variants either.
I compiled the code from Ross's document, and built a small test program
to test it. I used Ross's "reverse" lookup table, which is the same
table we use in Postgres. It produces this output:
Calculating CRC-32 (polynomial 04C11DB7) for a single zero byte:
D202EF8D 11010010000000101110111110001101 (simple)
2D02EF72 10101101000000101110111101110010 (lookup)
D202EF8D 11010010000000101110111110001101 (lookup reflected)
Hmm. So the simple, non-table driven, calculation gives the same result
as using the lookup table using the reflected lookup code. That's
expected; the lookup method is supposed to be the same, just faster.
However, using the "normal" lookup code, but with a "reflected" lookup
table, produces the same result as Postgres' algorithm. Indeed, that's
what we do in PostgreSQL. But AFAICS, that's an incorrect combination.
You're supposed to use the non-reflected lookup table with the
non-reflected lookup code; you can't mix and match.
As far as I can tell, PostgreSQL's so-called CRC algorithm doesn't
correspond to any bit-by-bit CRC variant and polynomial. My math skills
are not strong enough to determine what the consequences of that are. It
might still be a decent checksum. Or not. I couldn't tell if the good
error detection properties of the normal CRC-32 polynomial apply to our
algorithm or not.
Thoughts? Attached is the test program I used for this.
- Heikki
On 2014-10-08 22:13:46 +0300, Heikki Linnakangas wrote:
Hmm. So the simple, non-table driven, calculation gives the same result as
using the lookup table using the reflected lookup code. That's expected; the
lookup method is supposed to be the same, just faster. However, using the
"normal" lookup code, but with a "reflected" lookup table, produces the same
result as Postgres' algorithm. Indeed, that's what we do in PostgreSQL. But
AFAICS, that's an incorrect combination. You're supposed to use the
non-reflected lookup table with the non-reflected lookup code; you can't mix
and match.
As far as I can tell, PostgreSQL's so-called CRC algorithm doesn't
correspond to any bit-by-bit CRC variant and polynomial. My math skills are
not strong enough to determine what the consequences of that are. It might
still be a decent checksum. Or not. I couldn't tell if the good error
detection properties of the normal CRC-32 polynomial apply to our algorithm
or not.
Additional interesting datapoints are that hstore and ltree contain the
same tables - but properly use the reflected computation.
Thoughts?
It clearly seems like a bad idea to continue with this - I don't think
anybody here knows which guarantees this gives us.
The question is how can we move away from this. There's unfortunately
two places that embed PGC32 that are likely to prove problematic when
fixing the algorithm: pg_trgm and tsgist both seem to include crc's in
their logic in a persistent way. I think we should provide
INIT/COMP/FIN_PG32 using the current algorithm for these.
If we're switching to a saner computation, we should imo also switch to
a better polynomial - CRC-32C has better error detection capabilities than
CRC32 and is available in hardware. As we're paying the price of
breaking compat anyway...
Arguably we could also say that, given that there have been few evident
problems with the borked computation, we could switch to a much
faster hash instead of continuing to use crc...
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 09/10/14 10:13, Andres Freund wrote:
On 2014-10-08 22:13:46 +0300, Heikki Linnakangas wrote:
Hmm. So the simple, non-table driven, calculation gives the same result as
using the lookup table using the reflected lookup code. That's expected; the
lookup method is supposed to be the same, just faster. However, using the
"normal" lookup code, but with a "reflected" lookup table, produces the same
result as Postgres' algorithm. Indeed, that's what we do in PostgreSQL. But
AFAICS, that's an incorrect combination. You're supposed to use the
non-reflected lookup table with the non-reflected lookup code; you can't mix
and match.
As far as I can tell, PostgreSQL's so-called CRC algorithm doesn't
correspond to any bit-by-bit CRC variant and polynomial. My math skills are
not strong enough to determine what the consequences of that are. It might
still be a decent checksum. Or not. I couldn't tell if the good error
detection properties of the normal CRC-32 polynomial apply to our algorithm
or not.Additional interesting datapoints are that hstore and ltree contain the
same tables - but properly use the reflected computation.Thoughts?
It clearly seems like a bad idea to continue with this - I don't think
anybody here knows which guarantees this gives us.
The question is how can we move away from this. There's unfortunately
two places that embed PGC32 that are likely to prove problematic when
fixing the algorithm: pg_trgm and tsgist both seem to include crc's in
their logic in a persistent way. I think we should provide
INIT/COMP/FIN_PG32 using the current algorithm for these.
If we're switching to a saner computation, we should imo also switch to
a better polynomial - CRC-32C has better error detection capabilities than
CRC32 and is available in hardware. As we're paying the price of
breaking compat anyway...
Arguably we could also say that, given that there have been few evident
problems with the borked computation, we could switch to a much
faster hash instead of continuing to use crc...
Greetings,
Andres Freund
Could a 64 bit variant of some kind be useful as an option - in addition
to a correct 32 bit?
Most people have 64 bit processors, storage is less constrained
nowadays, and we tend to store much larger chunks of data.
Cheers,
Gavin
On 10/09/2014 01:23 AM, Gavin Flower wrote:
On 09/10/14 10:13, Andres Freund wrote:
If we're switching to a saner computation, we should imo also switch to
a better polynomial - CRC-32C has better error detection capabilities than
CRC32 and is available in hardware. As we're paying the price of
breaking compat anyway...
Arguably we could also say that, given that there have been few evident
problems with the borked computation, we could switch to a much
faster hash instead of continuing to use crc...
Could a 64 bit variant of some kind be useful as an option - in addition
to a correct 32 bit?
More bits allows you to detect more errors. That's the only advantage.
I've never heard that being a problem, so no, I don't think that's a
good idea.
Most people have 64 bit processors, storage is less constrained
nowadays, and we tend to store much larger chunks of data.
That's irrelevant to the CRC in the WAL. Each WAL record is CRC'd
separately, and they tend to be very small (less than 8k typically)
regardless of how "large chunks of data" you store in the database.
- Heikki
On 10/09/2014 12:13 AM, Andres Freund wrote:
On 2014-10-08 22:13:46 +0300, Heikki Linnakangas wrote:
As far as I can tell, PostgreSQL's so-called CRC algorithm doesn't
correspond to any bit-by-bit CRC variant and polynomial. My math skills are
not strong enough to determine what the consequences of that are. It might
still be a decent checksum. Or not. I couldn't tell if the good error
detection properties of the normal CRC-32 polynomial apply to our algorithm
or not.
Additional interesting datapoints are that hstore and ltree contain the
same tables - but properly use the reflected computation.
Thoughts?
It clearly seems like a bad idea to continue with this - I don't think
anybody here knows which guarantees this gives us.
The question is how can we move away from this. There's unfortunately
two places that embed PGC32 that are likely to prove problematic when
fixing the algorithm: pg_trgm and tsgist both seem to include crc's in
their logic in a persistent way. I think we should provide
INIT/COMP/FIN_PG32 using the current algorithm for these.
Agreed, it's not worth breaking pg_upgrade for this.
If we're switching to a saner computation, we should imo also switch to
a better polynomial - CRC-32C has better error detection capabilities than
CRC32 and is available in hardware. As we're paying the price of
breaking compat anyway...
Agreed.
Arguably we could also say that, given that there have been few evident
problems with the borked computation, we could switch to a much
faster hash instead of continuing to use crc...
I don't feel like taking the leap. Once we switch to slice-by-4/8 and/or
use a hardware instruction when available, CRC is fast enough.
I came up with the attached patches. They do three things:
1. Get rid of the 64-bit CRC code. It's not used for anything, and
hasn't been for years, so it doesn't seem worth spending any effort to
fix it.
2. Switch to CRC-32C (Castagnoli) for WAL and other places that don't
need to remain compatible across major versions.
3. Use the same lookup table for hstore and ltree, as used for the
legacy "almost CRC-32" algorithm. The tables are identical, so might as
well.
Any objections?
- Heikki
On 10/27/2014 06:02 PM, Heikki Linnakangas wrote:
I came up with the attached patches. They do three things:
1. Get rid of the 64-bit CRC code. It's not used for anything, and
hasn't been for years, so it doesn't seem worth spending any effort to
fix it.
2. Switch to CRC-32C (Castagnoli) for WAL and other places that don't
need to remain compatible across major versions.
3. Use the same lookup table for hstore and ltree, as used for the
legacy "almost CRC-32" algorithm. The tables are identical, so might as
well.
Any objections?
I hear none, so committed with some small fixes.
- Heikki
On Tue, Nov 4, 2014 at 4:47 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
I hear none, so committed with some small fixes.
Are you going to get the slice-by-N stuff working next, to speed this up?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2014-11-04 08:21:13 -0500, Robert Haas wrote:
On Tue, Nov 4, 2014 at 4:47 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
I hear none, so committed with some small fixes.
Are you going to get the slice-by-N stuff working next, to speed this up?
I don't plan to do anything serious with it, but I've hacked up the crc
code to use the hardware instruction. The results are quite good - crc
vanishes entirely from the profile for most workloads. It's still
visible for bulk copy, but that's pretty much it.
Greetings,
Andres Freund
On 11/04/2014 03:21 PM, Robert Haas wrote:
On Tue, Nov 4, 2014 at 4:47 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
I hear none, so committed with some small fixes.
Are you going to get the slice-by-N stuff working next, to speed this up?
I don't have any concrete plans, but yeah, that would be great. So
definitely maybe.
- Heikki
On Tue, Nov 4, 2014 at 3:17 PM, Heikki Linnakangas <hlinnakangas@vmware.com>
wrote:
On 10/27/2014 06:02 PM, Heikki Linnakangas wrote:
I came up with the attached patches. They do three things:
1. Get rid of the 64-bit CRC code. It's not used for anything, and
hasn't been for years, so it doesn't seem worth spending any effort to
fix it.
2. Switch to CRC-32C (Castagnoli) for WAL and other places that don't
need to remain compatible across major versions.
Will this change allow a database created before this commit to be
started after this commit?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On 11/07/2014 07:08 AM, Amit Kapila wrote:
On Tue, Nov 4, 2014 at 3:17 PM, Heikki Linnakangas <hlinnakangas@vmware.com>
wrote:
On 10/27/2014 06:02 PM, Heikki Linnakangas wrote:
I came up with the attached patches. They do three things:
1. Get rid of the 64-bit CRC code. It's not used for anything, and
hasn't been for years, so it doesn't seem worth spending any effort to
fix it.
2. Switch to CRC-32C (Castagnoli) for WAL and other places that don't
need to remain compatible across major versions.
Will this change allow a database created before this commit to be
started after this commit?
No. You could use pg_resetxlog to fix the WAL, but I think at least
relmap files would still prevent you from starting up. You could use
pg_upgrade.
- Heikki
At 2014-11-04 14:40:36 +0100, andres@2ndquadrant.com wrote:
On 2014-11-04 08:21:13 -0500, Robert Haas wrote:
Are you going to get the slice-by-N stuff working next, to speed
this up?
I don't plan to do anything serious with it, but I've hacked up the
crc code to use the hardware instruction.
I'm working on this (first speeding up the default calculation using
slice-by-N, then adding support for the SSE4.2 CRC instruction on top).
-- Abhijit
On Tue, Nov 11, 2014 at 6:26 AM, Abhijit Menon-Sen <ams@2ndquadrant.com> wrote:
At 2014-11-04 14:40:36 +0100, andres@2ndquadrant.com wrote:
On 2014-11-04 08:21:13 -0500, Robert Haas wrote:
Are you going to get the slice-by-N stuff working next, to speed
this up?
I don't plan to do anything serious with it, but I've hacked up the
crc code to use the hardware instruction.
I'm working on this (first speeding up the default calculation using
slice-by-N, then adding support for the SSE4.2 CRC instruction on top).
Great!
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
At 2014-11-11 16:56:00 +0530, ams@2ndQuadrant.com wrote:
I'm working on this (first speeding up the default calculation using
slice-by-N, then adding support for the SSE4.2 CRC instruction on
top).
I've done the first part in the attached patch, and I'm working on the
second (especially the bits to issue CPUID at startup and decide which
implementation to use).
As a benchmark, I ran pg_xlogdump --stats against 11GB of WAL data (674
segments) generated by running a total of 2M pgbench transactions on a
db initialised with scale factor 25. The tests were run on my i5-3230
CPU, and the code in each case was compiled with "-O3 -msse4.2" (and
without --enable-debug). The profile was dominated by the CRC
calculation in ValidXLogRecord.
With HEAD's CRC code:
bin/pg_xlogdump --stats wal/000000010000000000000001 29.81s user 3.56s system 77% cpu 43.274 total
bin/pg_xlogdump --stats wal/000000010000000000000001 29.59s user 3.85s system 75% cpu 44.227 total
With slice-by-4 (a minor variant of the attached patch; the results are
included only for curiosity's sake, but I can post the code if needed):
bin/pg_xlogdump --stats wal/000000010000000000000001 13.52s user 3.82s system 48% cpu 35.808 total
bin/pg_xlogdump --stats wal/000000010000000000000001 13.34s user 3.96s system 48% cpu 35.834 total
With slice-by-8 (i.e. the attached patch):
bin/pg_xlogdump --stats wal/000000010000000000000001 7.88s user 3.96s system 34% cpu 34.414 total
bin/pg_xlogdump --stats wal/000000010000000000000001 7.85s user 4.10s system 34% cpu 35.001 total
(Note the progressive reduction in user time from ~29s to ~8s.)
Finally, just for comparison, here's what happens when we use the
hardware instruction via gcc's __builtin_ia32_crc32xx intrinsics
(i.e. the additional patch I'm working on):
bin/pg_xlogdump --stats wal/000000010000000000000001 3.33s user 4.79s system 23% cpu 34.832 total
There are a number of potential micro-optimisations, I just wanted to
submit the obvious thing first and explore more possibilities later.
-- Abhijit
Attachments:
slice8.diff (text/x-diff)
On 11/19/2014 05:58 PM, Abhijit Menon-Sen wrote:
At 2014-11-11 16:56:00 +0530, ams@2ndQuadrant.com wrote:
I'm working on this (first speeding up the default calculation using
slice-by-N, then adding support for the SSE4.2 CRC instruction on
top).
I've done the first part in the attached patch, and I'm working on the
second (especially the bits to issue CPUID at startup and decide which
implementation to use).
As a benchmark, I ran pg_xlogdump --stats against 11GB of WAL data (674
segments) generated by running a total of 2M pgbench transactions on a
db initialised with scale factor 25.
That's an interesting choice of workload. That sure is heavy on the CRC
calculation, but the speed of pg_xlogdump hardly matters in real life.
With HEAD's CRC code:
bin/pg_xlogdump --stats wal/000000010000000000000001 29.81s user 3.56s system 77% cpu 43.274 total
bin/pg_xlogdump --stats wal/000000010000000000000001 29.59s user 3.85s system 75% cpu 44.227 total
With slice-by-4 (a minor variant of the attached patch; the results are
included only for curiosity's sake, but I can post the code if needed):
bin/pg_xlogdump --stats wal/000000010000000000000001 13.52s user 3.82s system 48% cpu 35.808 total
bin/pg_xlogdump --stats wal/000000010000000000000001 13.34s user 3.96s system 48% cpu 35.834 total
With slice-by-8 (i.e. the attached patch):
bin/pg_xlogdump --stats wal/000000010000000000000001 7.88s user 3.96s system 34% cpu 34.414 total
bin/pg_xlogdump --stats wal/000000010000000000000001 7.85s user 4.10s system 34% cpu 35.001 total
(Note the progressive reduction in user time from ~29s to ~8s.)
Finally, just for comparison, here's what happens when we use the
hardware instruction via gcc's __builtin_ia32_crc32xx intrinsics
(i.e. the additional patch I'm working on):
bin/pg_xlogdump --stats wal/000000010000000000000001 3.33s user 4.79s system 23% cpu 34.832 total
Impressive!
It would be good to see separate benchmarks on WAL generation, and WAL
replay. pg_xlogdump is probably close to WAL replay, but the WAL
generation case is somewhat different, as WAL is generated in small
pieces, and each piece is accumulated to the CRC separately. The
slice-by-X algorithm might be less effective in that scenario. I have no
doubt that it's still a lot faster than the byte-at-a-time operation,
but would be nice to have numbers on it.
- Heikki
On Wed, Nov 19, 2014 at 11:44 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
That's an interesting choice of workload. That sure is heavy on the CRC
calculation, but the speed of pg_xlogdump hardly matters in real life.
But isn't a workload that is heavy on CRC calculation exactly what we
want here? That way we can see clearly how much benefit we're getting
on that particular part of the computation. It'll still speed up
other workloads, too, just not as much.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
On Wed, Nov 19, 2014 at 11:44 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
That's an interesting choice of workload. That sure is heavy on the CRC
calculation, but the speed of pg_xlogdump hardly matters in real life.
But isn't a workload that is heavy on CRC calculation exactly what we
want here? That way we can see clearly how much benefit we're getting
on that particular part of the computation. It'll still speed up
other workloads, too, just not as much.
Heikki's point is that it's an unrealistic choice of CRC chunk size.
Maybe that doesn't matter very much, but it's unproven.
regards, tom lane
On 11/19/2014 06:49 PM, Robert Haas wrote:
On Wed, Nov 19, 2014 at 11:44 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
That's an interesting choice of workload. That sure is heavy on the CRC
calculation, but the speed of pg_xlogdump hardly matters in real life.
But isn't a workload that is heavy on CRC calculation exactly what we
want here? That way we can see clearly how much benefit we're getting
on that particular part of the computation. It'll still speed up
other workloads, too, just not as much.
Sure. But pg_xlogdump's way of using the CRC isn't necessarily
representative of how the backend uses it. It's probably pretty close to
WAL replay in the server, but even there the server might be hurt more
by the extra cache used by the lookup tables. And a backend generating
the WAL computes the CRC on smaller pieces than pg_xlogdump and WAL redo
does.
That said, the speedup is so large that I'm sure this is a big win in
the server too, despite those factors.
- Heikki
On 2014-11-19 19:12:22 +0200, Heikki Linnakangas wrote:
On 11/19/2014 06:49 PM, Robert Haas wrote:
On Wed, Nov 19, 2014 at 11:44 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
That's an interesting choice of workload. That sure is heavy on the CRC
calculation, but the speed of pg_xlogdump hardly matters in real life.
But isn't a workload that is heavy on CRC calculation exactly what we
want here? That way we can see clearly how much benefit we're getting
on that particular part of the computation. It'll still speed up
other workloads, too, just not as much.
Sure. But pg_xlogdump's way of using the CRC isn't necessarily
representative of how the backend uses it. It's probably pretty close to WAL
replay in the server, but even there the server might be hurt more by the
extra cache used by the lookup tables. And a backend generating the WAL
computes the CRC on smaller pieces than pg_xlogdump and WAL redo does.
Right. Although it hugely depends on the checkpoint settings - if
there's many FPWs it doesn't matter much.
Obviously it won't be a fourfold performance improvement in the server,
but given the profiles I've seen in the past I'm pretty sure it'll be a
benefit.
That said, the speedup is so large that I'm sure this is a big win in the
server too, despite those factors.
Yep. I've done some fast and loose benchmarking in the past and it was
quite noticeable. Made XLogInsert() nearly entirely drop from profiles.
Greetings,
Andres Freund
At 2014-11-19 19:12:22 +0200, hlinnakangas@vmware.com wrote:
But pg_xlogdump's way of using the CRC isn't necessarily
representative of how the backend uses it. It's probably pretty close
to WAL replay in the server, but even there the server might be hurt
more by the extra cache used by the lookup tables.
Sure. As Robert said, my initial benchmark was designed to show the CRC
improvements in isolation. I would be happy to conduct other tests and
post the numbers.
If I understand correctly, I need to demonstrate two things that are
"probably fine", but we don't have proof of:
(a) that the improvements in pg_xlogdump performance translate to an
improvement in the server when reading WAL.
(b) that the slice-by-8 code doesn't hurt performance for writing WAL.
To address (a), I am timing a standby restoring the same 11GB of WAL via
restore_command with and without the CRC patch. My earlier tests showed
that this time can vary quite a bit between runs even with no changes,
but I expect to see an improvement anyway.
Suggestions for how to address (b) are welcome.
-- Abhijit