Inaccurate results from numeric ln(), log(), exp() and pow()
Hi,
I recently noticed that numeric log() produces inaccurate results for
certain ranges of inputs. This isn't just small errors in the last 1
or 2 digits, but sometimes quite large errors, with over half the
significant digits returned being incorrect.
The initial issue was a block of code to estimate the weight of the
logarithm, repeated in 3 different places, and called from ln(), log()
and pow(). After coming up with a fix for that, however, testing
revealed a whole range of additional issues causing further
inaccuracies in those functions and also in exp().
In some cases numeric pow() and exp() have even larger errors than
their floating point counterparts, and for some inputs the errors in
numeric pow() grow to the point where they trigger a division-by-zero
error when there is actually a perfectly well-defined answer.
The full details are listed below, and a WIP patch is attached. It
might well be sensible to break this patch up, but for now it was
easier to test with all the fixes in one place. Most of the patch is
actually test cases.
One question this does raise though is just how accurate do we expect
these functions to be? I've taken the view that, however many digits
they decide to return, they all ought to be correct (or as close as we
can reasonably get). For most "everyday" sized numbers, that makes
sense, but there are extreme cases where it might be considered over
the top, such as when the magnitude of the result is much larger than
the inputs.
With my patch, I get correctly rounded results exact to the full
precision returned in all the cases that I tested [1]. For "everyday"
sized numbers, performance is also around the same, and in a few cases
I tested it was 3 or 4 times faster, where I've hacked code that was
obviously inefficient as well as inaccurate. However, in the most
extreme cases, the patch does make these functions considerably
slower.
For example, exp() works for inputs up to 6000. However, if you
compute exp(5999.999) the answer is truly huge -- probably only of
academic interest to anyone. With HEAD, exp(5999.999) produces a
number with 2609 significant digits in just 1.5ms (on my ageing
desktop box). However, only the first 9 digits returned are correct.
The other 2600 digits are pure noise. With my patch, all 2609 digits
are correct (confirmed using bc), but it takes 27ms to compute, making
it 18x slower.
AFAICT, this kind of slowdown only happens in cases like this where a
very large number of digits are being returned. It's not obvious what
we should be doing in cases like this. Is a performance reduction like
that acceptable to generate the correct answer? Or should we try to
produce a more approximate result more quickly, and where do we draw
the line?
Thoughts?
Regards,
Dean
[1]: NOTE: I don't claim that this patch always generates correct
answers with correct rounding down to the last digit. To guarantee
that, you'd have to do a lot more work -- accurately tracking the
sizes of errors throughout the computations, testing whether the
computed result *can* actually be rounded accurately (uniquely) to the
required precision, and if necessary re-computing the whole thing to a
higher internal precision to get correct rounding (c.f., the code in
the MPFR library).
What the patch does try to do is generate correctly rounded answers
*most of the time*. There will be cases (though I haven't managed to
find any) where the result is slightly off, but I expect this to just
be a +/-1 error in the final digit, since the bounds of the errors in
the calculations are fairly well understood, barring further bugs of
course.
Here are the issues that I identified:
Logarithms with inputs close to 1
=================================
Example:
select log(1.00000001);
log
--------------------------------
0.0000000043429447973177941813
xxxx
0.0000000043429447973177943261 <-- Correct answer
The last 4 digits indicated are incorrect, although if its choice of
precision were consistent with other numeric functions, it would only
return the first 16 significant digits, which are actually correct.
For other values closer to 1, not even the first 16 significant digits
are correct. For example:
select log(1.000000000003);
log
------------------------------------
0.00000000000130288344570801830503
xxxxxxxxx
0.00000000000130288344570780115778 <-- Correct answer
although again it should probably only return the first 16 significant
digits in this case.
Similarly, if the base is close to 1, so that internally it divides by
a number close to zero, the result loses accuracy. For example:
select log(1.000000000003, 123456789012345);
log
--------------------------------
10815637441410.762212735610000
xx.xxxxxxxxxxxxxxx
10815637441426.985668897718893 <-- Correct answer
i.e., more than half the digits returned are incorrect.
There are 2 separate issues in log_var():
Firstly the computation of the rscale for the ln() calculations
doesn't consider the case where the input is close to 1, for which the
result will be very small (large negative weight). So the rscale used
is too small to accurately represent the intermediate ln() results.
For the same reason, the numeric ln() function produces low precision
results (not respecting NUMERIC_MIN_SIG_DIGITS = 16) for such inputs,
for example:
select ln(1.000000000003);
ln
--------------------
0.0000000000030000
which is technically correct, but if NUMERIC_MIN_SIG_DIGITS were
respected, it would produce
0.000000000002999999999995500
Secondly, it is wrong to use the same rscale for both ln()
computations, since their results may have completely different
weights, and so the number of significant digits in each intermediate
ln() result may be completely different. For this kind of computation
where two intermediate results are being multiplied or divided, the
number of significant digits kept in each number should be
approximately the same, since the accuracy of the final result is
limited to the minimum of the number of significant digits in the two
intermediate results (relative errors are additive under
multiplication and division). That can be seen with a bit of simple
maths [2].
So ideally each intermediate result should have the same number of
significant digits, which should be a little larger than the number of
significant digits required in the final result.
The patch adds a new function estimate_ln_weight() to better estimate
the weight of ln(x), which is needed in a few places. Using this
function, it is possible to work out the required rscale of the final
log() result, and how many significant digits it will contain. From
that it is then possible to estimate the required rscales of the two
intermediate ln() results, which will in general be different from one
another.
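To illustrate the idea with something runnable, here is a rough
Python/Decimal sketch of the estimate -- not the C code in the patch;
the function name estimate_ln_dweight() and the details are made up
for illustration, and the real function works directly on the
NumericVar digit array:

import math
from decimal import Decimal

def estimate_ln_dweight(x):
    # Estimate the decimal weight of ln(x), i.e. floor(log10(abs(ln(x)))),
    # for x > 0.
    x = Decimal(x)
    if Decimal("0.9") <= x <= Decimal("1.1"):
        # Close to 1: ln(x) = ln(1 + d) ~= d, so use the weight of d = x - 1.
        # A plain float approximation of ln(x) is useless here, since for
        # x = 1.000000000000000000001 it would only ever see 1.0.
        return (x - 1).adjusted()
    # Elsewhere, write x = m * 10^e with 1 <= m < 10; then
    # ln(x) = ln(m) + e*ln(10), which a double can evaluate even when x
    # itself is far outside the double range.
    e = x.adjusted()                     # decimal weight of x
    m = float(x.scaleb(-e))              # leading digits of x as a float
    return Decimal(math.log(m) + e * math.log(10)).adjusted()

print(estimate_ln_dweight("1.000000000003"))   # -> -12  (ln ~ 3e-12)
print(estimate_ln_dweight("123456789012345"))  # -> 1    (ln ~ 32.4)

Those two weights (-12 and 1) are exactly what's needed to pick
sensible rscales for the two intermediate ln() results in the log()
example above.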
This produces correctly rounded results for all the examples above,
and also makes the result precision more consistent with
NUMERIC_MIN_SIG_DIGITS.
pow() with non-integer exponents
================================
For non-integer exponents, power_var() uses ln() and has an almost
identical block of code to estimate the ln() weight. However, simply
replacing that code block with the new ln() weight estimating function
is insufficient. An additional problem is that there is no easy way to
estimate the weight of the final answer, and hence what local rscale
to use. The old code just arbitrarily doubled the rscale in the hope
that that would be enough, but in fact that is over the top in some
cases and nowhere near sufficient in others.
The patch estimates the result weight by first computing a few digits
of the intermediate ln() result and multiplying by the exponent. From
that it is possible to work out how many significant digits are
actually needed for the full-precision ln(). In some cases this
reduces the local rscale, and in others it increases it.
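The arithmetic behind the weight estimate is just log10(b^x) =
x * ln(b) / ln(10). A sketch, using plain Python floats purely to show
the calculation (the patch has to use a few digits of the numeric ln()
instead, since the base and exponent needn't fit in a double, and the
function name here is mine):

import math

def estimate_pow_dweight(base, exponent):
    # Decimal weight of base^exponent: floor(log10(base^exponent))
    #                                = floor(exponent * ln(base) / ln(10)).
    # Only a handful of digits of ln(base) are needed for this, so it is
    # cheap compared with the full-precision ln() computed afterwards.
    return int(math.floor(exponent * math.log(base) / math.log(10)))

print(estimate_pow_dweight(12.3, 45.6))   # -> 49, i.e. ~50 digits before
                                          # the decimal point, matching the
                                          # correct answer further down

Since the result has weight 49 (around 50 digits before the decimal
point), the full-precision ln() has to be carried to roughly that many
more significant digits than the requested rscale alone would imply,
which is the adjustment the old rscale-doubling heuristic got wrong.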
For example:
select 32.1 ^ 9.8;
?column?
--------------------
580429286752914.88
xxxxx.xx
580429286790711.10 <-- Correct answer
The first 10 digits are correct and the last 7 are incorrect. For
comparison, floating point arithmetic produces 14 correct digits, with
an error of just 2 in the final digit:
select 32.1::float8 ^ 9.8::float8;
?column?
-----------------
580429286790713
x
However, in this case all the inaccuracy of the numeric pow() result
is due to flaws in numeric exp() described below, and the intermediate
result is actually correct. In this case the patch causes a lower
local rscale to be used to compute the intermediate result, and still
produces an accurate value for it.
With larger inputs, however, it is necessary to increase the local
rscale to calculate ln_num accurately. For example:
select 12.3 ^ 45.6;
?column?
------------------------------------------------------
50081010343900755147725208390673045971755589376518.1
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.x
50081010321492803393171165777624533697036806969694.9 <-- Correct answer
The first 9 digits are correct and the last 42 are incorrect. Again,
floating point arithmetic produces a more accurate result:
select 12.3::float8 ^ 45.6::float8;
?column?
----------------------
5.00810103214931e+49
xx
In this case the patch increases the local rscale used to compute the logarithm.
pow() with integer exponents
============================
For integer exponents, power_var() goes through power_var_int() which
uses repeated multiplication rather than ln() and exp(). This suffers
from a similar problem - all the multiplications are done using the
same local rscale, despite the fact that the weight of the numbers
(and hence the number of significant digits) changes rapidly as the
result is calculated, and thus accuracy is lost. For example:
select 3.789 ^ 21;
?column?
--------------------------------
1409343026052.8716016316021821
xxxx
1409343026052.8716016316022141 <-- Correct answer
The first 25 digits are correct and the last 4 are incorrect.
Increasing the exponent amplifies this inaccuracy:
select 3.789 ^ 35;
?column?
----------------------------------------
177158169650516670809.3820505911993004
xxxxxxxxxxx
177158169650516670809.3820586142670135 <-- Correct answer
select 1.2 ^ 345;
?column?
-----------------------------------------------
2077446682327378559843543776.7195509332497062
xxxxxx.xxxxxxxxxxxxxxxx
2077446682327378559843444695.5827049735727869 <-- Correct answer
With negative exponents power_var_int() takes the reciprocal of the
result of all the multiplications. If the magnitude of the input
number is less than 1, the successive multiplications cause it to
become smaller and smaller, with more and more precision lost as the
intermediate results are rounded to the local rscale, so the end
result is even more inaccurate. For example:
select 0.12 ^ (-20);
?column?
--------------------------------------
2631578947368421052.6315789473684211
xxxxxxxxxxxxxxxxx.xxxxxxxxxxxxxxxx
2608405330458882702.5529619561355838 <-- Correct answer
Pushing this example a bit further, the numeric computation fails
completely when the exponent is large enough to make the intermediate
result zero:
select 0.12 ^ (-25);
ERROR: division by zero
The patch works by first producing an estimate for the weight (and
hence the number of significant digits) of the final result. Then each
multiplication is done with a local rscale sufficient to give each
intermediate result the appropriate number of significant digits
(greater than the number in the final result). Thus the local rscale
increases or decreases as the calculation proceeds, depending on
whether the magnitude of the input base is less than or greater than
1.
Since relative errors are additive under multiplication, the relative
error of this calculation may grow proportionally with the exponent,
which means the number of extra significant digits required in the
intermediate multiplications grows with the exponent roughly as
log10(exp).
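Here's a rough illustration of the scheme using Python's decimal
module -- not the patch's code (power_var_int() works with rscales on
NumericVar), and the exact guard-digit choices below are mine:

import math
from decimal import Decimal, getcontext

def pow_int(base, n, result_sig_digits):
    # Keep a fixed number of significant digits in every intermediate
    # product, whatever its weight, with a few extra digits to absorb the
    # accumulated rounding error (one rounding per multiplication, hence
    # roughly log10(n) extra digits needed).
    getcontext().prec = (result_sig_digits
                         + int(math.ceil(math.log10(max(abs(n), 2)))) + 2)
    r = Decimal(1)
    b = Decimal(base)
    for _ in range(abs(n)):
        r *= b
    if n < 0:
        r = Decimal(1) / r
    getcontext().prec = result_sig_digits
    return +r    # round to the requested number of significant digits

print(pow_int("0.12", -20, 20))   # -> 2608405330458882702.6, cf. the
                                  # correct answer quoted above

The old fixed-rscale behaviour is easy to reproduce by quantizing each
intermediate product to a fixed number of fractional digits instead:
the value then loses a significant digit every time its weight drops,
and eventually rounds to zero, which is exactly the division-by-zero
failure shown above.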
Exponential function
====================
For inputs larger than 1, exp() uses power_var_int() for the integer
part of the input, and so suffers from a similar loss of accuracy:
select exp(32.999);
exp
---------------------
214429043491706.688
xxxx.xxx
214429043492155.053 <-- Correct answer
The first 11 digits are correct and the last 7 are incorrect. For
comparison, floating point arithmetic produces 14 correct digits, with
an error of just 1 in the final digit:
select exp(32.999::float8);
exp
-----------------
214429043492156
x
The fix above to power_var_int() fixes this example; however, by
itself it is not sufficient to fix all the issues with exp(). For
example:
select exp(123.456);
exp
------------------------------------------------------------
413294435274616618690601433419877395510217646433265686.666
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxx
413294435277809344957685441227343146614594393746575438.725 <-- Correct answer
With the fix to power_var_int() the result becomes slightly more accurate:
select exp(123.456);
exp
------------------------------------------------------------
413294435277809345183529335926669946765586864571855843.203
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxx
but the majority of the result digits are still incorrect. The reason
is essentially the same -- exp_var() in this example computes
exp(0.456), then computes exp(1) and raises it to the power of 123 to
give exp(123), and then multiplies them together to give the final
result. However, this is all done using the same local rscale, despite
the fact that each intermediate result has a completely different
weight. So in this case, the calculation of exp(0.456) produces the
smallest number of significant digits, which then restricts the
overall accuracy of the result.
To fix the above algorithm, both exp(1) and exp(0.456) would have to
be computed using much larger rscales, as would the code to raise
exp(1) to the power of 123. However, a much simpler, more efficient
and more accurate method is to replace exp_var() and
exp_var_internal() with a single function that works for all inputs
using a single Taylor series expansion, rather than doing separate
calculations for the integer and fractional parts of the input.
More specifically, first divide the input by an appropriate power of 2
to reduce it to a size for which the Taylor series converges
relatively quickly. Given that the input is at most 6000, and the
current choice of input range reduction reduces it to be less than
0.01, an equivalent single-step input range reduction would involve
dividing by 2^n where n is at most 20.
Then the Taylor series can be used to compute exp(x/2^n), and the
final result computed by raising that to the power of 2^n by squaring
n times. As above, each step needs to be done using a different local
rscale based on an estimate for the weight of the final result, and
the requested rscale.
In addition the Taylor series computation in exp_var_internal() is
particularly naive. The n'th term of the series is x^n/n!, but it
isn't actually necessary to compute values for x^n or n! (which gets
large very quickly). Instead, simply note that the n'th term of the
series is equal to x/n times the previous term.
Finally, for negative exponents, exp_var() first computes exp(Abs(x)),
then takes the reciprocal of the result. This is a waste of time,
particularly when Abs(x) is very large. Instead, just run the
algorithm above, which works perfectly fine regardless of the sign of
x.
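Putting those pieces together, here's a rough Decimal sketch of that
scheme -- the function name, the 0.01 threshold handling and the
guard-digit choices are mine, for illustration only; the patch does
all of this on NumericVar with explicitly estimated rscales:

from decimal import Decimal, getcontext

def exp_sketch(x, sig_digits):
    x = Decimal(x)
    # Guard digits: n squarings can grow the relative error by about 2^n,
    # i.e. at most ~6 decimal digits for n <= 20.
    getcontext().prec = sig_digits + 10
    # Range reduction: halve x until |x| < 0.01 (equivalent to one division
    # by 2^n; n <= 20 for inputs up to 6000). Works for negative x too.
    n = 0
    while abs(x) >= Decimal("0.01"):
        x /= 2
        n += 1
    # Taylor series for exp: each term is x/k times the previous one, so
    # x^k and k! are never formed explicitly.
    result = Decimal(1)
    term = Decimal(1)
    k = 0
    while term and term.adjusted() >= -(getcontext().prec + 2):
        k += 1
        term = term * x / k
        result += term
    # Undo the range reduction: exp(x) = exp(x/2^n) squared n times.
    for _ in range(n):
        result *= result
    getcontext().prec = sig_digits
    return +result

print(exp_sketch("32.999", 18))   # -> 214429043492155.053, cf. the
                                  # correct value quoted above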
Bug in mul_var() input truncation
=================================
Testing pow() with very large inputs reveals a bug in mul_var(), for
example in the final squaring in the computation of 1.234 ^ 8192. The
problem is with the input-truncation code:
/*
* Determine number of result digits to compute. If the exact result
* would have more than rscale fractional digits, truncate the computation
* with MUL_GUARD_DIGITS guard digits. We do that by pretending that one
* or both inputs have fewer digits than they really do.
*/
The code that follows appears to be completely bogus.
Firstly the input-truncation code inside the "res_ndigits > maxdigits"
block makes no sense at all, and it doesn't seem like it would ever be
safe to discard that many digits from the input numbers. A quick
back-of-the-envelope sketch, using a parallelogram to represent the
product, suggests that it ought to be something more like
var1ndigits = Min(var1ndigits, maxdigits)
var2ndigits = Min(var2ndigits, maxdigits)
That doesn't kick in until the number of digits required (maxdigits) is
smaller than the number of digits in either of the inputs (not when
it's smaller than the number of digits in the result).
Secondly the computation of maxdigits is incorrect:
maxdigits = res_weight + 1 + (rscale * DEC_DIGITS) + MUL_GUARD_DIGITS;
rscale is in decimal digits, and the other variables are all in units
of DEC_DIGITS digits, so it ought to be dividing by DEC_DIGITS, not
multiplying by it. This has the effect of overestimating the value of
maxdigits, reducing the chances of the truncation code kicking in,
which explains why this hasn't been seen before.
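(Presumably the intended computation was something along the lines of
maxdigits = res_weight + 1 + (rscale + DEC_DIGITS - 1) / DEC_DIGITS +
MUL_GUARD_DIGITS, i.e. rounding rscale up to whole NBASE digits, but I
haven't relied on that form here.)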
However, the new code in the patch now routinely calls mul_var() with
a requested rscale smaller than the input number scales (in order to
return the same number of significant digits), and so unlike the
pre-existing code it can trip over the bogus input-truncation code.
The patch completely removes this input-truncation code, partly
because I'm not entirely confident in the "back-of-the-envelope
sketch" above, and partly because it's unlikely that truncating the
inputs will be possible very often with the new code anyway. The new
code typically involves multiplying 2 numbers that each have the same
number of significant digits, and rounding the result to that same
number of significant digits, in which case truncation isn't possible.
Logarithms with inputs close to 0
=================================
For very small or very large inputs ln_var() reduces the range of the
input to 0.9 < x < 1.1 by repeatedly calling sqrt(). Besides being
inefficient, this produces further inaccuracies for very small inputs
(large scales). For example:
select log(1e-99);
log
---------------------------------------------------------------------------------------------------------
-98.999999999999999999999999999999999999999999999999999999999999999999888603671163628277631318319840599
Obviously the result should be exactly -99, so the last 33 digits are
incorrect. Similarly:
select ln(1e-99);
ln
----------------------------------------------------------------------------------------------------------
-227.955924206410522717781154013752056552509047374248524627299465264965210522829711384869465383058886342
The correct answer to that precision is
-227.955924206410522717781154013752056552509047374248524627299462195789688358057895543363723303870231536
so the last 40 digits are incorrect.
The patch solves this by first reducing the weight of the input number
by multiplying or dividing by a power of 10, and then adding a
matching multiple of log(10) to the final result. This reduces the
number of sqrt()'s needed, preventing this loss of accuracy.
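The identity being used is just ln(x) = ln(x / 10^k) + k * ln(10),
with k chosen so that the value handed to the core routine has weight
0. A rough Decimal sketch (Decimal.ln() stands in for the
Taylor-series core here, and the name and guard digits are mine):

from decimal import Decimal, getcontext

def ln_sketch(x, sig_digits):
    getcontext().prec = sig_digits + 8
    x = Decimal(x)
    k = x.adjusted()           # decimal weight of x, e.g. -99 for 1e-99
    reduced = x.scaleb(-k)     # now 1 <= reduced < 10, so only a handful
                               # of sqrt() steps are ever needed on it
    result = reduced.ln() + k * Decimal(10).ln()
    getcontext().prec = sig_digits
    return +result

print(ln_sketch("1e-99", 30))
# -> -227.955924206410522717781154014, agreeing with the correct value above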
An alternate method of reducing the input range, instead of repeated
use of sqrt(), is to find an approximate value for the ln() result
(e.g., by using floating point arithmetic) and divide by its exp(). The
result is then much closer to 1, which improves the convergence rate
of the Taylor series. However, it doesn't seem like a good idea to
make ln() dependent on exp().
---
[2]: A repeated theme in this patch is that rounding errors from
multiplying or dividing intermediate results should be minimised by
computing the intermediate results to the same number of significant
digits, not to the same rscale.
For example, consider the computation of
result = log(base, num)
which is done as follows:
x = ln(num)
y = ln(base)
result = x / y
In general, x and y cannot be represented exactly, so what is actually
computed is x' and y' -- approximations to x and y, rounded to a
certain precision. This rounding will introduce small errors that can
be written as follows:
x' = x * (1 + dx)
y' = y * (1 + dy)
where dx and dy are the relative errors in x' and y'. So, for example,
if x' has 20 significant digits, dx would be of order 10^-20. Note
that dx and dy may be positive or negative, but their magnitudes
should be much less than 1, depending on the number of significant
digits kept in x' and y'.
The result is computed by dividing x' by y', giving an approximation
to the exact result:
result' = x' / y'
= x * (1 + dx) / y / (1 + dy)
= x / y * (1 + dx) * (1 - dy + O(dy^2))
Since the magnitudes of dx and dy are much smaller than 1, higher
order factors can be neglected, giving the following approximation to
the overall result:
result' = result * (1 + dx - dy)
so in the worst case the relative error in the final result is
max_error = Abs(dx) + Abs(dy)
which is dominated by whichever intermediate relative error is larger
in magnitude. For example, if x' had 20 significant digits and y' had
9, dx would have a magnitude of order 10^-20, and dy would have a
magnitude of order 10^-9, and the worst-case relative error in the
final result would have a magnitude of order 10^-9, the same as dy.
Computing x' to more significant digits, without increasing the
precision of y', would have no effect on the overall accuracy of the
result, and is just a waste of effort.
Thus the optimum strategy is to compute x' and y' to the same number
of significant digits (slightly larger than the number of significant
digits required in the final result). This requires having reasonable
estimates for the weights of x and y, and the final result.
The same applies to calculations involving multiplying intermediate
results. Similarly when computing x^n using n multiplications, the
worst-case relative error in the result is around n*Abs(dx).
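A quick numerical check of that, using Python's decimal module as a
stand-in for the numeric code (illustrative only):

from decimal import Decimal, getcontext

getcontext().prec = 50
x = Decimal(2).ln()            # high-precision reference values
y = Decimal(10).ln()
exact = x / y                  # log(10, 2) to ~50 digits

getcontext().prec = 20
x20 = +x                       # x': 20 significant digits, dx ~ 1e-20
getcontext().prec = 9
y9 = +y                        # y': only 9 significant digits, dy ~ 1e-9

getcontext().prec = 50
approx = x20 / y9
print((approx - exact) / exact)   # ~ 1.3E-9: the error is dominated by
                                  # the 9-digit intermediate, no matter
                                  # how many digits x' carries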
Attachment: numeric.patch (text/x-diff)
On Wed, Sep 16, 2015 at 2:31 AM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
Hi,
I recently noticed that numeric log() produces inaccurate results for
certain ranges of inputs. This isn't just small errors in the last 1
or 2 digits, but sometimes quite large errors, with over half the
significant digits returned being incorrect.
yikes.
The initial issue was a block of code to estimate the weight of the
logarithm, repeated in 3 different places, and called from ln(), log()
and pow(). After coming up with a fix for that, however, testing
revealed a whole range of additional issues causing further
inaccuracies in those functions and also in exp().
<snip>
AFAICT, this kind of slowdown only happens in cases like this where a
very large number of digits are being returned.
Can you clarify "very large"?
merlin
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
... For example, exp() works for inputs up to 6000. However, if you
compute exp(5999.999) the answer is truly huge -- probably only of
academic interest to anyone. With HEAD, exp(5999.999) produces a
number with 2609 significant digits in just 1.5ms (on my ageing
desktop box). However, only the first 9 digits returned are correct.
The other 2600 digits are pure noise. With my patch, all 2609 digits
are correct (confirmed using bc), but it takes 27ms to compute, making
it 18x slower.
AFAICT, this kind of slowdown only happens in cases like this where a
very large number of digits are being returned. It's not obvious what
we should be doing in cases like this. Is a performance reduction like
that acceptable to generate the correct answer? Or should we try to
produce a more approximate result more quickly, and where do we draw
the line?
FWIW, in that particular example I'd happily take the 27ms time to get
the more accurate answer. If it were 270ms, maybe not. I think my
initial reaction to this patch is "are there any cases where it makes
things 100x slower ... especially for non-outrageous inputs?" If not,
sure, let's go for more accuracy.
regards, tom lane
On 16 September 2015 at 09:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
... For example, exp() works for inputs up to 6000. However, if you
compute exp(5999.999) the answer is truly huge -- probably only of
academic interest to anyone. With HEAD, exp(5999.999) produces a
number with 2609 significant digits in just 1.5ms (on my ageing
desktop box). However, only the first 9 digits returned are correct.
The other 2600 digits are pure noise. With my patch, all 2609 digits
are correct (confirmed using bc), but it takes 27ms to compute, making
it 18x slower.

AFAICT, this kind of slowdown only happens in cases like this where a
very large number of digits are being returned. It's not obvious what
we should be doing in cases like this. Is a performance reduction like
that acceptable to generate the correct answer? Or should we try to
produce a more approximate result more quickly, and where do we draw
the line?

FWIW, in that particular example I'd happily take the 27ms time to get
the more accurate answer. If it were 270ms, maybe not. I think my
initial reaction to this patch is "are there any cases where it makes
things 100x slower ... especially for non-outrageous inputs?" If not,
sure, let's go for more accuracy.
Agreed
Hopefully things can be made faster with fewer significant digits.
I figure this is important enough to trigger a maint release, but since we
already agreed when the next one is, I don't see we need to do it any
quicker, do we?
Well done Dean for excellent research.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 16 September 2015 at 14:49, Merlin Moncure <mmoncure@gmail.com> wrote:
AFAICT, this kind of slowdown only happens in cases like this where a
very large number of digits are being returned.

Can you clarify "very large"?
I haven't done much performance testing because I've been mainly
focussed on accuracy. I just did a quick test of exp() for various
result sizes. For results up to around 50 digits, the patched code was
twice as fast as HEAD. After that the gap narrows until at around 250
digits they become about the same speed, and beyond that the patched
code is slower. At around 450 digits the patched code is twice as
slow.
My guess is that no one is actually using it for numbers that large.
Regards,
Dean
On Wed, Sep 16, 2015 at 10:14 AM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
On 16 September 2015 at 14:49, Merlin Moncure <mmoncure@gmail.com> wrote:
AFAICT, this kind of slowdown only happens in cases like this where a
very large number of digits are being returned.

Can you clarify "very large"?
I haven't done much performance testing because I've been mainly
focussed on accuracy. I just did a quick test of exp() for various
result sizes. For results up to around 50 digits, the patched code was
twice as fast as HEAD. After that the gap narrows until at around 250
digits they become about the same speed, and beyond that the patched
code is slower. At around 450 digits the patched code is twice as
slow.

My guess is that no one is actually using it for numbers that large.
well, I'm sold :-).
(I certainly agree that a slow accurate answer is better than a fast
inaccurate one, but it's nice to know that reasonable users of these
functions won't be impacted)
merlin
On 16 September 2015 at 16:14, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
On 16 September 2015 at 14:49, Merlin Moncure <mmoncure@gmail.com> wrote:
AFAICT, this kind of slowdown only happens in cases like this where a
very large number of digits are being returned.

Can you clarify "very large"?
I haven't done much performance testing because I've been mainly
focussed on accuracy. I just did a quick test of exp() for various
result sizes. For results up to around 50 digits, the patched code was
twice as fast as HEAD. After that the gap narrows until at around 250
digits they become about the same speed, and beyond that the patched
code is slower. At around 450 digits the patched code is twice as
slow.
OK, scratch that. I managed to compare an optimised build with a debug
build somehow.
Comparing 2 optimised builds, it's still 2x faster at computing 16 or
17 digits, becomes about the same speed at 30 digits, and is 2x slower
at around 60 digits. So not nearly as impressive, but not too bad
either.
I'll try to do some more comprehensive performance testing over the
next few days.
Regards,
Dean
On 16 September 2015 at 15:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:
FWIW, in that particular example I'd happily take the 27ms time to get
the more accurate answer. If it were 270ms, maybe not. I think my
initial reaction to this patch is "are there any cases where it makes
things 100x slower ... especially for non-outrageous inputs?" If not,
sure, let's go for more accuracy.
On 16 September 2015 at 17:03, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
I'll try to do some more comprehensive performance testing over the
next few days.
I've done some more performance testing, and the results are broadly
in line with my initial expectations. There are a couple of cases
where pow() with non-integer powers is hundreds of times slower. This
happens when inputs with small numbers of significant digits generate
results with thousands of digits, that the code in HEAD doesn't
calculate accurately. However, there do not appear to be any cases
where this happens for "non-outrageous" inputs.
There are also cases where the new code is hundreds or even thousands
of times faster, mainly due to it making better choices for the local
rscale, and the reduced use of sqrt() in ln_var().
I wrote a script to test each function with a range of inputs, some
straightforward, and some intentionally difficult to compute. Attached
is the script's output. The columns in the output are:
* Function being called.
* The input(s) passed to it.
* Number of significant digits in the inputs (not counting trailing zeros).
* Number of significant digits in the output (HEAD vs patched code).
* Number of output digits on the right that differ between the two.
* Average function call time in HEAD.
* Average function call time with the patch.
* How many times faster or slower the patched code is.
There is a huge spread of function call times, both before and after
the patch, and the overall performance profile has changed
significantly, but in general the patched code is faster more often
than it is slower, especially for "non-outrageous" inputs.
All the cases where it is significantly slower are when the result is
significantly more accurate, but it isn't always slower to generate
more accurate results.
These results are based on the attached, updated patch which includes
a few minor improvements. The main changes are:
* In mul_var() instead of just ripping out the faulty input truncation
code, I've now replaced it with code that correctly truncates the
inputs as much as possible when the exact answer isn't required. This
produces a noticeable speedup in a number of cases. For example it
reduced the time to compute exp(5999.999) from 27ms to 20ms.
* Also in mul_var(), the simple measure of swapping the inputs so that
var1 is always the number with fewer digits, produces a worthwhile
benefit. This further reduced the time to compute exp(5999.999) to
17ms.
There's more that could be done to improve multiplication performance,
but I think that's out of scope for this patch.
Regards,
Dean
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
These results are based on the attached, updated patch which includes
a few minor improvements.
I started to look at this patch, and was immediately bemused by the
comment in estimate_ln_weight:
/*
* 0.9 <= var <= 1.1
*
* Its logarithm has a negative weight (possibly very large). Estimate
* it using ln(var) = ln(1+x) = x + O(x^2) ~= x.
*/
This comment's nonsense of course: ln(0.9) is about -0.105, and ln(1.1) is
about 0.095, which is not even negative much less "very large". We could
just replace that with "For values reasonably close to 1, we can estimate
the result using ln(var) = ln(1+x) ~= x." I am wondering what's the point
though: why not flush this entire branch in favor of always using the
generic case? I rather doubt that on any modern machine two uses of
cmp_var plus init_var/sub_var/free_var are going to be cheaper than the
log() call you save. You would need a different way to special-case 1.0,
but that's not expensive.
Larger questions are
(1) why the Abs() in the specification of estimate_ln_weight --- that
doesn't comport with the text about "Estimate the weight of the most
significant digit". (The branch I'm proposing you remove fails to
honor that anyway.)
(2) what should the behavior be for input == 1, and why? The code
is returning zero, but that seems inconsistent or at least
underdocumented.
(3) if it's throwing an error for zero input, why not for negative
input? I'm not sure that either behavior is within the purview of
this function, anyway. Seems like returning zero might be fine.
regards, tom lane
On 12 November 2015 at 21:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
These results are based on the attached, updated patch which includes
a few minor improvements.

I started to look at this patch, and was immediately bemused by the
comment in estimate_ln_weight:

/*
* 0.9 <= var <= 1.1
*
* Its logarithm has a negative weight (possibly very large). Estimate
* it using ln(var) = ln(1+x) = x + O(x^2) ~= x.
*/

This comment's nonsense of course: ln(0.9) is about -0.105, and ln(1.1) is
about 0.095, which is not even negative much less "very large".
That's nonsense. The comment is perfectly correct. It's not saying the
logarithm is negative, it's saying that the *weight* of the logarithm
is negative. For the examples you cite, the weight is small and
negative (around -1), but for inputs closer to 1 it can be large and
negative.
We could
just replace that with "For values reasonably close to 1, we can estimate
the result using ln(var) = ln(1+x) ~= x." I am wondering what's the point
though: why not flush this entire branch in favor of always using the
generic case? I rather doubt that on any modern machine two uses of
cmp_var plus init_var/sub_var/free_var are going to be cheaper than the
log() call you save. You would need a different way to special-case 1.0,
but that's not expensive.
No, you're missing the point entirely. Suppose var =
1.000000000000000000001, in which case ln(var) is approximately 1e-21.
The code in the first branch uses the approximation ln(var) = ln(1+x)
~= x = var-1 to see that the weight of ln(var) is around -21. You
can't do that with the code in second branch just by looking at the
first couple of digits of var and doing a floating point
approximation, because all you'd see would be 1.0000000 and your
approximation to the logarithm would be orders of magnitude out (as is
the case with the code that this function is replacing, which comes
with no explanatory comments at all).
Larger questions are
(1) why the Abs() in the specification of estimate_ln_weight --- that
doesn't comport with the text about "Estimate the weight of the most
significant digit".
Yes it does, and the formula is correct. The function returns an
estimate for the weight of the logarithm of var. Let L = ln(var), then
what we are trying to return is the weight of L. So we don't care
about the sign of L, we just need to know it's weight -- i.e., we want
approximately log10(Abs(L)) which is log10(Abs(ln(var))) as the
comment says. The Abs() is necessary because L might be negative (when
0 < var < 1).
(The branch I'm proposing you remove fails to
honor that anyway.)
No it doesn't. It honours the Abs() by ignoring the sign of x, and
just looking at its weight.
(2) what should the behavior be for input == 1, and why? The code
is returning zero, but that seems inconsistent or at least
underdocumented.
ln(1) is 0, which has a weight of 0. I suppose you could argue that
technically 0 could have any weight, but looking at the bigger
picture, what this function is for is deciding how many digits of
precision are needed in intermediate calculations to retain full
precision in the final result. When the input is exactly 1, the
callers will have nice exact results, and no extra intermediate
precision is needed, so returning 0 is quite sensible.
I guess adding words to that effect to the comment makes sense, since
it clearly wasn't obvious.
(3) if it's throwing an error for zero input, why not for negative
input? I'm not sure that either behavior is within the purview of
this function, anyway. Seems like returning zero might be fine.

regards, tom lane
[Shrug] It doesn't make much difference since both those error cases
will be caught a little later on in the callers, but since this
function needs to test for an empty digit array in the second branch
anyway, it seemed like it might as well report that error there.
Returning zero would be fine too.
Regards,
Dean
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
On 12 November 2015 at 21:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I started to look at this patch, and was immediately bemused by the
comment in estimate_ln_weight:
That's nonsense. The comment is perfectly correct. It's not saying the
logarithm is negative, it's saying that the *weight* of the logarithm
is negative.
Ah, you're right --- I'd gotten confused about the distinction between
ln(x) and ln(ln(x)). Nevermind ...
Next question: in the additional range-reduction step you added to ln_var,
why stop there, ie, what's the rationale for this magic number:
if (Abs((x.weight + 1) * DEC_DIGITS) > 10)
Seems like we arguably should do this whenever the weight isn't zero,
so as to minimize the number of sqrt() steps. (Yes, I see the point
about not getting into infinite recursion, but that only says that
the threshold needs to be more than 10, not that it needs to be 10^10.)
Also, it seems a little odd to put the recursive calculation of ln(10)
where you did, rather than down where it's used, ie why not
mul_var(result, &fact, result, local_rscale);
ln_var(&const_ten, &ln_10, local_rscale);
int64_to_numericvar((int64) pow_10, &ni);
mul_var(&ln_10, &ni, &xx, local_rscale);
add_var(result, &xx, result);
round_var(result, rscale);
As you have it, ln_10 will be calculated with possibly a smaller rscale
than is used in this stanza. That might be all right but it seems dubious
--- couldn't the lower-precision result leak into digits we care about?
regards, tom lane
BTW, something I find confusing and error-prone is that this patch keeps
on using the term "weight" to refer to numbers expressed in decimal digits
(ie, the approximate log10 of something). Basically everywhere in the
existing code, "weights" are measured in base-NBASE digits, while "scales"
are measured in decimal digits. I've not yet come across anyplace where
you got the units wrong, but it seems like a gotcha waiting to bite the
next hacker.
I thought for a bit about s/weight/scale/g in the patch, but that is not
le mot juste either, since weight is generally counting digits to the left
of the decimal point while scale is generally counting digits to the
right.
The best idea that has come to me is to use "dweight" to refer to a weight
measured in decimal digits. Anyone have a better thought?
regards, tom lane
On 13 November 2015 at 18:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Next question: in the additional range-reduction step you added to ln_var,
why stop there, ie, what's the rationale for this magic number:

if (Abs((x.weight + 1) * DEC_DIGITS) > 10)
Seems like we arguably should do this whenever the weight isn't zero,
so as to minimize the number of sqrt() steps. (Yes, I see the point
about not getting into infinite recursion, but that only says that
the threshold needs to be more than 10, not that it needs to be 10^10.)
It's a bit arbitrary. There is a tradeoff here -- computing ln(10) is
more expensive than doing a sqrt() since the Babylonian algorithm used
for sqrt() is quadratically convergent, whereas the Taylor series for
ln() converges more slowly. At the default precision, ln(10) is around
7 times slower than sqrt() on my machine, although that will vary with
precision, and the sqrt()s increase the local rscale and so they will
get slower. Anyway, it seemed reasonable to not do the extra ln()
unless it was going to save at least a couple of sqrt()s.
Also, it seems a little odd to put the recursive calculation of ln(10)
where you did, rather than down where it's used, ie why not

mul_var(result, &fact, result, local_rscale);

ln_var(&const_ten, &ln_10, local_rscale);
int64_to_numericvar((int64) pow_10, &ni);
mul_var(&ln_10, &ni, &xx, local_rscale);
add_var(result, &xx, result);

round_var(result, rscale);

As you have it, ln_10 will be calculated with possibly a smaller rscale
than is used in this stanza. That might be all right but it seems dubious
--- couldn't the lower-precision result leak into digits we care about?
Well it still has 8 digits more than the final rscale, so it's
unlikely to cause any loss of precision when added to the result and
then rounded to rscale. But on the other hand it does look neater to
compute it there, right where it's needed.
Regards,
Dean
On 13 November 2015 at 21:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:
BTW, something I find confusing and error-prone is that this patch keeps
on using the term "weight" to refer to numbers expressed in decimal digits
(ie, the approximate log10 of something). Basically everywhere in the
existing code, "weights" are measured in base-NBASE digits, while "scales"
are measured in decimal digits. I've not yet come across anyplace where
you got the units wrong, but it seems like a gotcha waiting to bite the
next hacker.

I thought for a bit about s/weight/scale/g in the patch, but that is not
le mot juste either, since weight is generally counting digits to the left
of the decimal point while scale is generally counting digits to the
right.

The best idea that has come to me is to use "dweight" to refer to a weight
measured in decimal digits. Anyone have a better thought?
Yeah, dweight works for me.
Regards,
Dean
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
On 13 November 2015 at 18:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Seems like we arguably should do this whenever the weight isn't zero,
so as to minimize the number of sqrt() steps.
It's a bit arbitrary. There is a tradeoff here -- computing ln(10) is
more expensive than doing a sqrt() since the Babylonian algorithm used
for sqrt() is quadratically convergent, whereas the Taylor series for
ln() converges more slowly. At the default precision, ln(10) is around
7 times slower than sqrt() on my machine, although that will vary with
precision, and the sqrt()s increase the local rscale and so they will
get slower. Anyway, it seemed reasonable to not do the extra ln()
unless it was going to save at least a couple of sqrt()s.
OK --- I think I miscounted how many sqrt's we could expect to save.
One more thing: the approach you used in power_var() of doing a whole
separate exp * ln(base) calculation to approximate the result weight
seems mighty expensive, even if it is done at minimal precision.
Couldn't we get a good-enough approximation using basically
numericvar_to_double_no_overflow(exp) * estimate_ln_weight(base) ?
regards, tom lane
On 13 November 2015 at 23:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:
One more thing: the approach you used in power_var() of doing a whole
separate exp * ln(base) calculation to approximate the result weight
seems mighty expensive, even if it is done at minimal precision.
Couldn't we get a good-enough approximation using basically
numericvar_to_double_no_overflow(exp) * estimate_ln_weight(base) ?
I can't see a way to make that work reliably. It would need to be
10^estimate_ln_weight(base) and the problem is that both exp and
ln_weight could be too big to fit in double variables, and become
HUGE_VAL, losing all precision.
An interesting example is the limit of (1+1/x)^x as x approaches
infinity which is e (the base of natural logarithms), so in that case
both the exponent and ln_weight could be arbitrarily big (well too big
for doubles anyway). For example (1+1/1.2e+500)^(1.2e500) =
2.7182818284...
Regards,
Dean
On 14 November 2015 at 00:16, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
I can't see a way to make that work reliably. It would need to be
10^estimate_ln_weight(base) and the problem is that both exp and
ln_weight could be too big to fit in double variables, and become
HUGE_VAL, losing all precision.
Of course I meant 10^ln_weight could be too big to fit in a double.
Regards,
Dean
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
On 14 November 2015 at 00:16, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
I can't see a way to make that work reliably. It would need to be
10^estimate_ln_weight(base) and the problem is that both exp and
ln_weight could be too big to fit in double variables, and become
HUGE_VAL, losing all precision.
Of course I meant 10^ln_weight could be too big to fit in a double.
Drat, that's the second time in this review that I've confused
ln_weight(x) with ln(x). Time to call it a day ...
regards, tom lane
On 13 November 2015 at 21:34, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:
On 13 November 2015 at 18:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Next question: in the additional range-reduction step you added to ln_var,
why stop there, ie, what's the rationale for this magic number:

if (Abs((x.weight + 1) * DEC_DIGITS) > 10)
Seems like we arguably should do this whenever the weight isn't zero,
so as to minimize the number of sqrt() steps. (Yes, I see the point
about not getting into infinite recursion, but that only says that
the threshold needs to be more than 10, not that it needs to be 10^10.)

It's a bit arbitrary. There is a tradeoff here -- computing ln(10) is
more expensive than doing a sqrt() since the Babylonian algorithm used
for sqrt() is quadratically convergent, whereas the Taylor series for
ln() converges more slowly. At the default precision, ln(10) is around
7 times slower than sqrt() on my machine, although that will vary with
precision, and the sqrt()s increase the local rscale and so they will
get slower. Anyway, it seemed reasonable to not do the extra ln()
unless it was going to save at least a couple of sqrt()s.

Also, it seems a little odd to put the recursive calculation of ln(10)
where you did, rather than down where it's used, ie why not

mul_var(result, &fact, result, local_rscale);
ln_var(&const_ten, &ln_10, local_rscale);
int64_to_numericvar((int64) pow_10, &ni);
mul_var(&ln_10, &ni, &xx, local_rscale);
add_var(result, &xx, result);

round_var(result, rscale);

As you have it, ln_10 will be calculated with possibly a smaller rscale
than is used in this stanza. That might be all right but it seems dubious
--- couldn't the lower-precision result leak into digits we care about?

Well it still has 8 digits more than the final rscale, so it's
unlikely to cause any loss of precision when added to the result
I thought I'd try an extreme test of that claim, so I tried the
following example:
select ln(2.0^435411);
ln
-------------------------
301803.9070347863471187
The input to ln() in this case is truly huge (possibly larger than we
ought to allow), and results in pow_10 = 131068 in ln_var(). So we
compute ln(10) with 8 extra digits of precision, multiply that by
131068, effectively shifting it up by 6 decimal digits, leaving a
safety margin of 2 extra digits at the lower end, before we add that
to the result. And the above result is indeed correct according to bc
(just don't try to compute it using l(2^435411), or you'll be waiting
a long time :-).
That leaves me feeling pretty happy about that aspect of the
computation because in all realistic cases pow_10 ought to fall way
short of that, so there's a considerable margin of safety.
I'm much less happy with the sqrt() range reduction step though. I now
realise that the whole increment local_rscale before each sqrt()
approach is totally bogus. Like elsewhere in this patch, what we ought
to be doing is ensuring that the number of significant digits remains
fixed as the weight of x is reduced. Given the slow rate of increase
of logarithms as illustrated above, we only need to keep rscale + a
few significant digits during the sqrt() range reduction. I tried the
following:
/*
 * Use sqrt() to reduce the input to the range 0.9 < x < 1.1.
 *
 * The final logarithm will have up to around rscale+6 significant digits.
 * Each sqrt() will roughly halve the weight of x, so adjust the local
 * rscale as we work so that we keep this many significant digits at each
 * step (plus a few more for good measure).
 */
while (cmp_var(&x, &const_zero_point_nine) <= 0)
{
    sqrt_rscale = rscale - x.weight * DEC_DIGITS / 2 + 8;
    sqrt_var(&x, &x, sqrt_rscale);
    mul_var(&fact, &const_two, &fact, 0);
}
while (cmp_var(&x, &const_one_point_one) >= 0)
{
    sqrt_rscale = rscale - x.weight * DEC_DIGITS / 2 + 8;
    sqrt_var(&x, &x, sqrt_rscale);
    mul_var(&fact, &const_two, &fact, 0);
}
and it passed all the tests (including the extreme case above) with
the power-10 range reduction step disabled.
So repeated use of sqrt() can be used for range reduction, even in
extreme cases, and it won't lose precision if it's done right. In
fact, in the worst case there are only 22 sqrts() by my count (the
most extreme inputs ln_var() can see have |ln(x)| around 3e5, as in
the example above, and halving that with each sqrt() takes
ceil(log2(3e5 / ln(1.1))) = 22 steps to bring x below 1.1).
Note that this also reduces the rscale used in the Taylor series,
since local_rscale is no longer being increased above its original
value of rscale+8. I think that's OK for largely the same reasons --
the result of the Taylor series is multiplied by a factor of at most
2^22 (and usually much less than that), so the 8 extra digits ought to
be sufficient, although that could be upped a bit if you really wanted
to be sure.
We might well want to keep the power-10 argument reduction step, but
it would now be there purely on performance grounds so there's scope
for playing around with the threshold at which it kicks in.
Regards,
Dean
Dean Rasheed <dean.a.rasheed@gmail.com> writes:
I'm much less happy with the sqrt() range reduction step though. I now
realise that the whole increment local_rscale before each sqrt()
approach is totally bogus.
Yeah, I was wondering about that yesterday ...
So repeated use of sqrt() can be used for range reduction, even in
extreme cases, and it won't lose precision if it's done right. In
fact, in the worst case there are only 22 sqrts() by my count.
Cool.
We might well want to keep the power-10 argument reduction step, but
it would now be there purely on performance grounds so there's scope
for playing around with the threshold at which it kicks in.
My inclination is to get rid of it. I thought having ln_var recurse was
rather ugly, and I'm suspicious of whether there's really any performance
benefit. Even with the pow_10 reduction, you can have up to 7 sqrt steps
(if the first digit is 9999, or indeed anything above 445), and will
almost always have 4 or 5. So if the pow_10 reduction costs about as much
as 7 sqrt steps, the input has to exceed something like 1e170 before it
can hope to win. Admittedly there's daylight between there and 1e128000,
but I doubt it's worth carrying extra code for.
regards, tom lane