BUG #15307: Low numerical precision of (Co-) Variance

Started by PG Bug reporting form · over 7 years ago · 12 messages · Lists: hackers, bugs
#1 PG Bug reporting form
noreply@postgresql.org

The following bug has been logged on the website:

Bug reference: 15307
Logged by: Erich Schubert
Email address: erich@debian.org
PostgreSQL version: 9.6.6
Operating system: Linux
Description:

Numerical precision of variance computations in PostgreSQL is too low.

SELECT VAR_SAMP(x::float8), COVAR_SAMP(x, x), VAR_SAMP(x)
FROM (SELECT 1000000.01 as x UNION SELECT 999999.99 as x) AS x

The first two give the low-precision answer 0.000244140625 instead of
0.0002. Interestingly enough, VAR_SAMP(x) is okay - I guess that postgres
may autodetect a fixed-precision decimal here, or use some high precision
code.

If you add just another digit (10 million +- 1 cent), the bad functions even
return a variance of 0:

SELECT VAR_SAMP(x::float8), COVAR_SAMP(x, x), VAR_SAMP(x)
FROM (SELECT 10000000.01 as x UNION SELECT 9999999.99 as x) AS x

The reason is catastrophic cancellation. Apparently, the covariance is
computed using the E[XY]-E[X]E[Y] approach, which suffers from low numerical
precision.

While I report this against version 9.6.6 (since I used sqlfiddle), this
is clearly still present in Git:

https://github.com/postgres/postgres/blob/6bf0bc842bd75877e31727eb559c6a69e237f831/src/backend/utils/adt/float.c#L2606
https://github.com/postgres/postgres/blob/6bf0bc842bd75877e31727eb559c6a69e237f831/src/backend/utils/adt/float.c#L2969

It should also affect the general "numeric" version (i.e. all versions of
variance/covariance/standard deviation use the known-bad equation), but for
integers it will usually be fine as long as the sum-of-squares can be
stored:
https://github.com/postgres/postgres/blob/6bf0bc842bd75877e31727eb559c6a69e237f831/src/backend/utils/adt/numeric.c#L4883

There are a number of methods to get reasonable precision. The simplest to
implement is a two-pass approach: first compute the mean of each variable,
then compute E[(X-meanX)*(Y-meanY)] in a second pass. This will usually give
the best precision. Faster (single-pass) methods can be found in literature
from the 1970s, and also in Donald Knuth's "The Art of Computer Programming".
In particular, Youngs and Cramer's version performed quite well (and
surprisingly much better than Welford's; supposedly due to CPU pipelining).
Single-pass and parallelization-friendly approaches (interesting in
particular for distributed databases, but also to avoid IO cost) with good
accuracy are studied in:

Erich Schubert and Michael Gertz. Numerically Stable Parallel Computation
of (Co-)Variance. In: Proceedings of the 30th International Conference on
Scientific and Statistical Database Management (SSDBM), Bolzano-Bozen,
Italy. 2018, 10:1–10:12. DOI: 10.1145/3221269.3223036.
https://dl.acm.org/citation.cfm?doid=3221269.3223036

The problem has also been observed in other SQL databases (e.g. Microsoft
SQL Server; others like MySQL have implemented a numerically stable
single-pass approach), see e.g.:

Kamat, Niranjan, and Arnab Nandi. "A Closer Look at Variance
Implementations In Modern Database Systems." ACM SIGMOD Record 45.4 (2017):
28-33. DOI: 10.1145/3092931.3092936.
https://dl.acm.org/citation.cfm?id=3092936

Sorry, I do not know the PostgreSQL internals (aggregation...) well enough
to provide a patch.

#2 Dean Rasheed
dean.a.rasheed@gmail.com
In reply to: PG Bug reporting form (#1)
Re: BUG #15307: Low numerical precision of (Co-) Variance

On 1 August 2018 at 14:35, PG Bug reporting form <noreply@postgresql.org> wrote:

Numerical precision of variance computations in PostgreSQL is too low.

SELECT VAR_SAMP(x::float8), COVAR_SAMP(x, x), VAR_SAMP(x)
FROM (SELECT 1000000.01 as x UNION SELECT 999999.99 as x) AS x

The first two give the low-precision answer 0.000244140625 instead of
0.0002. Interestingly enough, VAR_SAMP(x) is okay ...

For a number of those statistical aggregates, PostgreSQL provides 2
implementations -- one implemented using double precision floating
point arithmetic, which is much faster, but necessarily less accurate
and possibly platform-dependent; and one implemented using the
arbitrary precision numeric datatype, which will return much more
accurate results. For any input datatypes other than floating point,
you will automatically get the latter, which is what you're seeing
with var_samp(x), when you're not explicitly casting the input to
float8.

However, all the two-argument aggregates such as corr() and
covar_pop/samp() currently only have floating point implementations,
and suffer from the problem you report, which I agree is not great.
If we can easily improve the accuracy of those aggregates, then I
think it is worthwhile.

Using a two pass approach isn't really an option, given the way that
aggregates work in PostgreSQL, however, implementing Welford's
algorithm looks to be quite straightforward. I had a quick play and I
found that it fixed the accuracy problem with no noticeable
performance penalty -- there are a few extra cycles in the accumulator
functions, but compared to the overall per-tuple overhead, that
appears to be negligible.

I'll post something shortly.

Regards,
Dean

#3 Erich Schubert
erich@debian.org
In reply to: Dean Rasheed (#2)
Re: BUG #15307: Low numerical precision of (Co-) Variance

Hi,

For a number of those statistical aggregates, PostgreSQL provides 2
implementations -- one implemented using double precision floating
point arithmetic, which is much faster, but necessarily less accurate
and possibly platform-dependent; and one implemented using the
arbitrary precision numeric datatype, which will return much more
accurate results. For any input datatypes other than floating point,
you will automatically get the latter, which is what you're seeing
with var_samp(x), when you're not explicitly casting the input to
float8.

Ok, I am surprised that this is possible to do with arbitrary precision
here, as, for example, with 8 data points it will eventually involve a
division by 7, which cannot be represented even with arbitrary (finite)
precision floating point. But maybe this division just comes late enough
(after the difference) that it works out before the conversion to float
for the final division.

However, all the two-argument aggregates such as corr() and
covar_pop/samp() currently only have floating point implementations,
and suffer from the problem you report, which I agree is not great.
If we can easily improve the accuracy of those aggregates, then I
think it is worthwhile.

Using a two pass approach isn't really an option, given the way that
aggregates work in PostgreSQL, however, implementing Welford's
algorithm looks to be quite straightforward. I had a quick play and I
found that it fixed the accuracy problem with no noticeable
performance penalty -- there are a few extra cycles in the accumulator
functions, but compared to the overall per-tuple overhead, that
appears to be negligible.

Instead of Welford, I recommend using the Youngs-Cramer approach. It is
almost the same; roughly, it aggregates sum(X) and sum((X-meanX)²)
directly (whereas Welford updates mean(X) and sum((X-meanX)²)). So the
first aggregate is actually unchanged from the current approach.
In my experiments, this was surprisingly much faster when the data was a
double[]; explainable by CPU pipelining (see the Figure in the paper I
linked - the division comes late, and the next step does not depend on
the division, so pipelining can already process the next double without
waiting for the division to finish).

Youngs & Cramer original publication:
https://www.jstor.org/stable/pdf/1267176.pdf

The (non-parallel) implementations of the classic algorithms used in the
SSDBM 2018 paper are (templated only for aggregate precision; no guard
against len=0):

template <typename T = double>
double youngs_cramer_var(const double* const data, const size_t len) {
    T sum = data[0], var = 0;
    for(size_t i=1; i<len;) {
        const size_t oldi = i;
        const double x = data[i++];
        sum += x;
        double tmp = i * x - sum;
        var += tmp * tmp / (i * (double) oldi);
    }
    return var / len;
}

Close to Welford's original article:

template <typename T = double>
double welford_var(const double* const data, const size_t len) {
    T mean = data[0], var = 0;
    for(size_t i=1; i<len; ) {
        const size_t oldi = i;
        const double x = data[i++];
        const double delta = x - mean;
        mean = mean * oldi / (double) i + x / i;
        var += oldi / (double) i * delta * delta;
    }
    return var / len;
}

From Knuth's book, attributed to Welford (but using fewer divisions),
likely the most widely known incremental version:

template <typename T = double>
double knuth_var(const double* const data, const size_t len) {
    T mean = data[0], xx = 0;
    for(size_t i=1; i<len;) {
        const double v = data[i++];
        const double delta = v - mean;
        mean += delta / i;
        xx += delta * (v - mean);
    }
    return xx / len;
}

In both Welford and Knuth, the "mean" aggregate depends on the division;
with YC it does not, so I recommend YC.

If Postgres could use parallel aggregation, let me know. I can assist
with the proper aggregation of several partitions (whether distributed or
multithreaded). Using multiple partitions can also slightly improve
precision; but it is mostly interesting for processing multiple chunks in
parallel.
If I have some spare time to clean it up some more, I intend to
open-source the entire experimental code from SSDBM18 for reproducibility.

Regards,
Erich

#4 Dean Rasheed
dean.a.rasheed@gmail.com
In reply to: Erich Schubert (#3)
Re: BUG #15307: Low numerical precision of (Co-) Variance

On 7 August 2018 at 12:58, Erich Schubert <erich@debian.org> wrote:

I am surprised that this is possible to do with arbitrary precision
here, as, for example, with 8 data points it will eventually involve a
division by 7, which cannot be represented even with arbitrary (finite)
precision floating point. But maybe this division just comes late enough
(after the difference) that it works out before the conversion to float
for the final division.

Right, the division comes at the end in that case.

Instead of Welford, I recommend using the Youngs-Cramer approach. It is
almost the same; roughly, it aggregates sum(X) and sum((X-meanX)²)
directly (whereas Welford updates mean(X) and sum((X-meanX)²)). So the
first aggregate is actually unchanged from the current approach.
In my experiments, this was surprisingly much faster when the data was a
double[]; explainable by CPU pipelining (see the Figure in the paper I
linked - the division comes late, and the next step does not depend on the
division, so pipelining can already process the next double without waiting
for the division to finish).

Youngs & Cramer original publication:
https://www.jstor.org/stable/pdf/1267176.pdf

OK, thanks for the reference. That was very interesting.

It was actually the Knuth variant of the Welford algorithm that I was
playing with, but I have now tried out the Youngs-Cramer algorithm as
well.

In terms of performance the YC algorithm was maybe 2-3% slower than
Welford, which was roughly on a par with the current schoolbook
algorithm in PostgreSQL, but there was some variability in those
results, depending on the exact query in question. One thing to bear
in mind is that the kind of micro-optimisations that will make a huge
difference when processing in-memory data in tight loops as in your
examples are not nearly as important in the PostgreSQL environment
where there is a significant per-tuple overhead in fetching data,
depending on the enclosing SQL query. So the odd cycle here and there
from a division isn't going to matter much in this context.

Both algorithms eliminate the main loss-of-precision problem that the
current schoolbook algorithm suffers from, which I think has to be the
primary aim of this. The YC paper suggests that the results from their
algorithm might be slightly more accurate (although maybe not enough
that we really care). What I do like about the YC algorithm is that it
guarantees the same result from avg(), whereas the Welford algorithm
will change avg() slightly, possibly making it very slightly less
accurate.

So I concur, the YC algorithm is probably preferable. For the record,
attached are both versions that I tried.

If Postgres could use parallel aggregation, let me know. I can assist
with the proper aggregation of several partitions (whether distributed or
multithreaded). Using multiple partitions can also slightly improve
precision; but it is mostly interesting for processing multiple chunks in
parallel.

PostgreSQL isn't multithreaded, but it will parallelise large
aggregates by distributing the computation over a small number of
worker backend processes, and then combining the results. I did a
quick back-of-the-envelope calculation to see how to combine the
results, and in testing it seemed to work correctly, but it would be
worth having a second pair of eyes to validate that.

I'll add this to the next commitfest for consideration.

Regards,
Dean

Attachments:

implement-float-aggs-using-welford.patch (text/x-patch; +342/-246)
implement-float-aggs-using-youngs-cramer.patch (text/x-patch; +348/-247)
#5 Dean Rasheed
dean.a.rasheed@gmail.com
In reply to: Dean Rasheed (#4)
Re: BUG #15307: Low numerical precision of (Co-) Variance

On 9 August 2018 at 12:02, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:

... the YC algorithm is probably preferable. For the record,
attached are both versions that I tried.

Here is an updated, more complete patch, based on the YC algorithm,
with updated regression tests for the September commitfest.

All the existing tests pass unchanged, although I'm somewhat surprised
that the current tests pass with no platform variations. I've added
new tests to cover infinity/NaN handling and parallel aggregation, and
to confirm the improved accuracy with large offsets. The latter tests
operate well within the limits of double precision arithmetic, so I
wouldn't expect any platform variation, but that's difficult to
guarantee. If there are problems, it may be necessary to round the
test results.

Notable changes from the previous patch:

I have rewritten the overflow checks in the accum functions to be
clearer and more efficient which, if anything, makes these aggregates
now slightly faster than HEAD. More importantly though, I've added
explicit code to force Sxx to be NaN if any input is infinite, which
the previous coding didn't guarantee. I think NaN is the right result
for quantities like variance, if any input value is infinite, since it
logically involves 'infinity minus infinity'. That's also consistent
with the current behaviour.

I have also made the aggregate combine functions SQL-callable to make
testing easier -- there was a bug in the previous version due to a
typo which meant that float8_regr_combine() was incorrect when N1 was
non-zero and N2 was zero. That situation is unlikely to happen in
practice, and difficult to provoke deliberately without manually
calling the combine function, which is why I didn't spot it before.
The new tests cover all code branches, and make it easier to see that
the combine functions are producing the correct results.

Regards,
Dean

Attachments:

implement-float-aggs-using-youngs-cramer-v2.patch (application/octet-stream; +627/-256)
#6 Madeleine Thompson
madeleineth@gmail.com
In reply to: Dean Rasheed (#5)
Re: BUG #15307: Low numerical precision of (Co-) Variance

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: not tested
Documentation: not tested

This is my first PostgreSQL review. Hopefully I'm getting it right. I independently ran into the bug in question and found that the author had already written a patch. I'm happy the author wrote this.

(1) I am not experienced with PostgreSQL internals, so I can't comment on the coding style or usage of internal functions.

(2) The patch applies cleanly to ce4887bd025b95c7b455fefd817a418844c6aad3. "make check", "make check-world", and "make install" pass.

(3) The patch addresses the numeric inaccuracy reported in the bug. (Yay!) I believe the tests are sufficient to confirm that it works as intended. I don't think the test coverage *before* this patch was sufficient, and I appreciate the improved test coverage of the combine functions. I verified the new tests manually.

(4) The comments cite Youngs & Cramer (1971). This is a dated citation. It justifies its algorithms based on pre-IEEE754 single-precision arithmetic. Most notably, it assumes that floating-point division is not exact. That said, the algorithm is still a good choice; it is justified now because, compared to the standard Welford variance, it is less prone to causing pipeline stalls on a modern CPU. Maybe link to Schubert & Gertz 2018 (https://dl.acm.org/citation.cfm?id=3223036) instead. The new float8_combine code is (almost) S&G eqn. 22; the float8_accum code is S&G eqn. 28. float8_regr_accum and float8_regr_combine are S&G eqn. 22.

(5) I think the comment /* Watch out for roundoff error producing a negative variance */ and associated check are obsolete now.

The new status of this patch is: Waiting on Author

#7 Madeleine Thompson
madeleineth@gmail.com
In reply to: Madeleine Thompson (#6)
Re: BUG #15307: Low numerical precision of (Co-) Variance

By the way, I did not see the discussion thread or notice that an author of the paper I mentioned and the reporter of the bug were the same person until after I wrote the review. Oops.

#8 Dean Rasheed
dean.a.rasheed@gmail.com
In reply to: Madeleine Thompson (#6)
Re: BUG #15307: Low numerical precision of (Co-) Variance

On 27 September 2018 at 06:12, Madeleine Thompson <madeleineth@gmail.com> wrote:

This is my first PostgreSQL review. Hopefully I'm getting it right. I independently ran into the bug in question and found that the author had already written a patch. I'm happy the author wrote this.

Thanks for the review. And yes, this sort of review is exactly what we need.

(1) I am not experienced with PostgreSQL internals, so I can't comment on the coding style or usage of internal functions.

(2) The patch applies cleanly to ce4887bd025b95c7b455fefd817a418844c6aad3. "make check", "make check-world", and "make install" pass.

(3) The patch addresses the numeric inaccuracy reported in the bug. (Yay!) I believe the tests are sufficient to confirm that it works as intended. I don't think the test coverage *before* this patch was sufficient, and I appreciate the improved test coverage of the combine functions. I verified the new tests manually.

Excellent. Thanks for testing.

(4) The comments cite Youngs & Cramer (1971). This is a dated citation. It justifies its algorithms based on pre-IEEE754 single-precision arithmetic. Most notably, it assumes that floating-point division is not exact. That said, the algorithm is still a good choice; it is justified now because, compared to the standard Welford variance, it is less prone to causing pipeline stalls on a modern CPU. Maybe link to Schubert & Gertz 2018 (https://dl.acm.org/citation.cfm?id=3223036) instead. The new float8_combine code is (almost) S&G eqn. 22; the float8_accum code is S&G eqn. 28. float8_regr_accum and float8_regr_combine are S&G eqn. 22.

Hmm, I think that Youngs & Cramer should be cited as the original
inventors of the algorithm (which pre-dates the first version of
IEEE754 by a few years), although I'm happy to also cite Schubert &
Gertz as a more modern justification for the choice of algorithm.

Regarding the combine functions, I think perhaps Chan, Golub & LeVeque
[1] should also be cited. Their paper deals with variance, not
covariance, but that's a straightforward generalisation of their work.

[1] Updating Formulae and a Pairwise Algorithm for Computing Sample
Variances, T. F. Chan, G. H. Golub & R. J. LeVeque, COMPSTAT 1982.

(5) I think the comment /* Watch out for roundoff error producing a negative variance */ and associated check are obsolete now.

Ah, good point. We clearly only ever add positive quantities to Sxx
and Syy, and likewise when they are combined, so there is no way that
they can ever become negative now. Another neat consequence of this
algorithm.

I'll post an updated patch in a while.

Regards,
Dean

#9 Dean Rasheed
dean.a.rasheed@gmail.com
In reply to: Dean Rasheed (#8)
Re: BUG #15307: Low numerical precision of (Co-) Variance

On Thu, 27 Sep 2018 at 14:22, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:

I'll post an updated patch in a while.

I finally got round to looking at this again. Here is an update, based
on the review comments.

The next question is whether or not to back-patch this. Although this
was reported as a bug, my inclination is to apply this to HEAD only,
based on the lack of prior complaints. That also matches our decision
for other similar patches, e.g., 7d9a4737c2.

Regards,
Dean

Attachments:

implement-float-aggs-using-youngs-cramer-v3.patch (text/x-patch; +639/-262)
#10 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Dean Rasheed (#9)
Re: BUG #15307: Low numerical precision of (Co-) Variance

Dean Rasheed <dean.a.rasheed@gmail.com> writes:

The next question is whether or not to back-patch this. Although this
was reported as a bug, my inclination is to apply this to HEAD only,
based on the lack of prior complaints. That also matches our decision
for other similar patches, e.g., 7d9a4737c2.

+1 for no back-patch. Even though the new results are hopefully more
accurate, they're still different from before, and that might cause
issues for somebody if it happens in a stable branch.

regards, tom lane

#11 Madeleine Thompson
madeleineth@gmail.com
In reply to: Dean Rasheed (#9)
Re: BUG #15307: Low numerical precision of (Co-) Variance

On Wed, Oct 3, 2018 at 9:04 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:

On Thu, 27 Sep 2018 at 14:22, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:

I'll post an updated patch in a while.

I finally got round to looking at this again. Here is an update, based
on the review comments.

The next question is whether or not to back-patch this. Although this
was reported as a bug, my inclination is to apply this to HEAD only,
based on the lack of prior complaints. That also matches our decision
for other similar patches, e.g., 7d9a4737c2.

This diff looks good to me. Also, it applies cleanly against
abd9ca377d669a6e0560e854d7e987438d0e612e and passes `make
check-world`.

I agree that this is not suitable for a patch release.

Madeleine

#12 Dean Rasheed
dean.a.rasheed@gmail.com
In reply to: Madeleine Thompson (#11)
Re: BUG #15307: Low numerical precision of (Co-) Variance

On Wed, 3 Oct 2018 at 15:58, Madeleine Thompson <madeleineth@gmail.com> wrote:

This diff looks good to me. Also, it applies cleanly against
abd9ca377d669a6e0560e854d7e987438d0e612e and passes `make
check-world`.

I agree that this is not suitable for a patch release.

Pushed to master. Thanks for the review.

Regards,
Dean