numeric precision when raising one numeric to another.
From checking the output of raising one numeric to another, it appears
the result actually comes back as floating point. Is this normal and/or
expected?
Now, given that
create table test2 (i1 numeric(20,0), i2 numeric(20,0));
insert into test2 values (123456789012345,123456789012345);
select i1*i2 from test2;
gives:
?column?
-------------------------------
15241578753238669120562399025
it seems odd that
create table test (i1 numeric(20,0), i2 numeric(20,0));
insert into test values (2,55);
select i1^i2 from test;
gives:
?column?
---------------------
3.6028797018964e+16
Now, I can get an exact answer if I'm willing to twiddle with breaking
the exponent down:
select (2^60)::numeric;
Gives:
numeric
---------------------
1152921504606850000
While select (2^30)::numeric*(2^30)::numeric;
Gives:
?column?
---------------------
1152921504606846976
So, numeric can hold the value, but it looks like the exponent math is
converting it to float.
I'm not bothered too much by it, as I don't really work with numbers
that big. I was mainly wondering whether this is kosher, is all.
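The conversion Scott describes can be reproduced outside SQL. A minimal sketch in Python, with plain ints standing in for numeric and floats standing in for float8:

```python
# Exact integer arithmetic keeps every digit of the product, while a
# float8-style round trip keeps only ~15-16 significant digits.
a = 123456789012345
exact = a * a                    # what numeric * numeric gives
approx = float(a) * float(a)     # what a float8 code path gives

print(exact)                     # 15241578753238669120562399025
print(int(approx))               # only the leading digits survive

# 2^55 is itself a power of two, so float8 actually stores it exactly;
# only the default display rounds it to 3.6028797018964e+16.
print(2.0 ** 55 == 2 ** 55)      # True
```

So part of what looks like precision loss above is just float8 display rounding; the product case shows real loss.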
Scott Marlowe <smarlowe@g2switchworks.com> writes:
It appears from checking the output of exponentiation of one numeric to
another, the output is actually in floating point. Is this normal and /
or expected?
Yes, seeing that the only ^ operator we have is float8.
regression=# \do ^
List of operators
   Schema   | Name |  Left arg type   |  Right arg type  |   Result type    |     Description
------------+------+------------------+------------------+------------------+----------------------
pg_catalog | ^ | double precision | double precision | double precision | exponentiation (x^y)
(1 row)
regards, tom lane
On Wed, 2005-05-18 at 16:42, Tom Lane wrote:
Scott Marlowe <smarlowe@g2switchworks.com> writes:
It appears from checking the output of exponentiation of one numeric to
another, the output is actually in floating point. Is this normal and /
or expected?
Yes, seeing that the only ^ operator we have is float8.
But is this proper behaviour?
Considering that the SQL spec says the result of multiplication of exact
numeric types is exact numeric types of precision S1+S2, and
exponentiation is nothing more than repeated multiplication, should
postgresql have a numeric-capable exponentiation operator? Since I've
finally got a job where I can actually hack on the clock a bit, it might
be a nice trial balloon. It'll take a week or two to knock the rust off
my C skills though. :)
Scott Marlowe <smarlowe@g2switchworks.com> writes:
Considering that the SQL spec says the result of multiplication of exact
numeric types is exact numeric types of precision S1+S2, and
exponentiation is nothing more than repeated multiplication,
... not when the exponent is non-integral.
regards, tom lane
Considering that the SQL spec says the result of multiplication of exact
numeric types is exact numeric types of precision S1+S2, and
exponentiation is nothing more than repeated multiplication,
... not when the exponent is non-integral.
For one thing. For another, I believe the standard C library only has
floating point exponentiation functions, not that there aren't plenty
of numeric libraries with integral ones. Finally, exponentiated
numbers get real big, real fast, and the floating point types can hold
much larger magnitudes than the integer types, albeit inexactly. For
example, on the Mac I'm using now, long long ints max out at about
10^19, while long doubles can represent 10^308.
- John Burger
MITRE
On Wed, May 18, 2005 at 10:46:50PM -0400, John Burger wrote:
For one thing. For another, I believe the standard C library only has
floating point exponentiation functions, not that there aren't plenty
of numeric libraries with integral ones. Finally, exponentiated
numbers get real big, real fast, and the floating point types can hold
much larger magnitudes than the integer types, albeit inexactly. For
example, on the Mac I'm using now, long long ints max out at about
10^19, while long doubles can represent 10^308.
Well, we already have an interesting library of mathematical functions
for NUMERIC (which is an arbitrary precision type, so it wouldn't matter
how big the result would get). I think the only reason we don't have a
NUMERIC exponentiation function is that nobody has implemented it.
--
Alvaro Herrera (<alvherre[a]surnet.cl>)
"People get annoyed when you try to debug them." (Larry Wall)
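Alvaro's point can be sketched with Python's decimal module standing in for NUMERIC: given arbitrary-precision ln() and exp() primitives, a pow() is nearly a one-liner. The function name and precision handling here are illustrative, not PostgreSQL's:

```python
from decimal import Decimal, getcontext

# Hypothetical numeric_pow(), built only from the ln()/exp() primitives
# an arbitrary-precision library already has.
def numeric_pow(base: Decimal, exponent: Decimal, digits: int = 40) -> Decimal:
    getcontext().prec = digits           # working precision
    return (exponent * base.ln()).exp()  # A^z = exp(z * ln A)

print(numeric_pow(Decimal(2), Decimal(55)))   # 2^55, good to ~40 digits
```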
On Wed, May 18, 2005 at 11:32:40PM -0400, Alvaro Herrera wrote:
On Wed, May 18, 2005 at 10:46:50PM -0400, John Burger wrote:
Well, we already have an interesting library of mathematical functions
for NUMERIC (which is an arbitrary precision type, so it wouldn't matter
how big the result would get). I think the only reason we don't have a
NUMERIC exponentiation function is that nobody has implemented it.
The prerequisites for such a function would be a log() and exp()
function for numeric. And the real question there would be: what's
sufficient accuracy? The numbers people actually use rarely have
rational logarithms, so there is no way to store them 100% accurately.
As long as you're using integral exponents you can get away with
multiplication. BTW, the command-line utility "bc" has arbitrary-precision
arithmetic; maybe we can see how they do it? It defaults to 20 digits of
precision, which is obviously not enough for large exponents.
Hmm, it looks like even they don't support raising to fractional
powers. When calculating 2^100, you need a precision of at least 35
decimal places to get in the ballpark of the correct figure using
log/exp, 30 isn't enough. Maybe do exact for integer exponents and
approx for non-integer?
kleptog@vali:~$ bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
2^100
1267650600228229401496703205376
2^100.1
Runtime warning (func=(main), adr=11): non-zero scale in exponent
1267650600228229401496703205376
e(l(2)*100)
1267650600228229400579922894637.90158245154400629512
scale=30
e(l(2)*100)
1267650600228229401496703205353.617337311111135194699059124092
scale=35
e(l(2)*100)
1267650600228229401496703205375.99897630874075350752485091801369515
Hope this helps,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
tool for doing 5% of the work and then sitting around waiting for someone
else to do the other 95% so you can sue them.
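Martijn's bc session can be mirrored with Python's decimal module (again a stand-in for NUMERIC): the error of exp(ln(2)*100) shrinks as the working precision grows, and roughly 35 digits are needed before the result is good to the integer:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
exact = Decimal(2) ** 100      # 1267650600228229401496703205376

errors = {}
for prec in (20, 30, 35):
    getcontext().prec = prec                   # working precision, like bc's scale
    approx = (Decimal(2).ln() * 100).exp()     # 2^100 via log/exp
    getcontext().prec = 50                     # measure the error at high precision
    errors[prec] = abs(approx - exact)
    print(prec, approx)
```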
PostgreSQL has a numeric exp() function and a numeric ln() function, so
a numeric pow() function is trivial.
pow(A,z) = exp(z*ln(A))
It could probably be made a bit more efficient if specially tuned so as
not to require these functions.
Newton's method (or something of that nature) could obviously be used to
write a more generic version. The C double-precision function can provide
the starting estimate.
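The "double seed plus Newton refinement" idea can be sketched for the n-th root, a building block for fractional exponents. Everything here is an illustrative sketch in Python's decimal module, not actual PostgreSQL numeric code:

```python
from decimal import Decimal, getcontext

def nth_root(a: Decimal, n: int, digits: int = 50) -> Decimal:
    getcontext().prec = digits + 5           # a few guard digits
    x = Decimal(float(a) ** (1.0 / n))       # double-precision starting estimate
    n_d = Decimal(n)
    for _ in range(10):                      # quadratic convergence: 10 is plenty here
        # Newton step for f(x) = x^n - a:  x <- ((n-1)*x + a/x^(n-1)) / n
        x = ((n_d - 1) * x + a / x ** (n - 1)) / n_d
    getcontext().prec = digits
    return +x                                # unary plus rounds to target precision

print(nth_root(Decimal(2), 2))               # sqrt(2) to 50 digits
```

Since the double seed is already correct to ~16 digits, each Newton step roughly doubles the correct digits.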
We use Moshier's excellent qfloat numbers.
http://www.moshier.net/qlib.zip
Documentation:
http://www.moshier.net/qlibdoc.html
So, if you do the following query using CONNX:
select convert(pow(9.5,5.9), varchar)
You will get:
586906.97548405202106027547827738573075504470845684721318303336760202394
5916438064873363100477233500417619
select pow(9.5,5.9)
will return
586906.975484052
Since we bind to double by default.
Correct answer is (1000+ digits correct):
586906.97548405202106027547827738573075504470845684721318303336760202394
591643806487336310047723350041762446340060298807517843626920535883745120
986264188881010308125070048988991029963307831015812131852033741567043945
026243178422915290830477381800527219457732229115168020868495354958648414
971711685840852684310130094029132142016389076807514261122763703528030232
527888410105794936941873557344173381053429729906642653004811669321631656
412265025095200907690509153627646726650174318576911125609483654656735531
730688699016039020145753010069585349923506043259767525488453544723589880
427675085429230106535405724821481118286775763085905255396545439080913364
233329975992733986721408870779427889446166143315004295671202526112889352
043403059958082573333911277403826735005243749050919501832287479909523379
145261282152034011112442260653013983173651648948479379642961647792197822
118268619926636309476522424825736766449170308662847527591516245860159270
335785812239686778074630519049627528571047048724459826189283691382474184
22032503387712889
It might seem like overkill, but (for instance) we have customers who
measure every toll on toll roads for large states in the eastern US.
If they want to calculate 5 years of interest on the current balance,
accurate to the penny, at small interest rates, such precision is very
helpful.
His (Moshier's) math stuff is really top-notch.
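The interest use case can be sketched with Python's decimal module; the balance and rate below are invented for illustration:

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 60                        # generous working precision

balance = Decimal("1000000000.00")            # a billion dollars, exact
daily_rate = Decimal("0.05") / Decimal(365)   # 5% nominal, compounded daily
days = 5 * 365                                # five years

factor = (1 + daily_rate) ** days             # integral exponent: no log/exp needed
final = (balance * factor).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(final)                                  # accurate to the penny
```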
On Thu, May 19, 2005 at 10:41:51AM -0700, Dann Corbit wrote:
We use Moshier's excellent qfloat numbers.
http://www.moshier.net/qlib.zip
Documentation:
http://www.moshier.net/qlibdoc.html
So, if you do the following query using CONNX:
select convert(pow(9.5,5.9), varchar)
You will get:
586906.97548405202106027547827738573075504470845684721318303336760202394
5916438064873363100477233500417619
But it's not accurate enough with the default settings. For example
2^100:
# select exp( ln(2::numeric) * 100 );
exp
--------------------------------------------------
1267650600228229400579922894637.9015824515440063
(1 row)
The answer should be:
1267650600228229401496703205376
So it's wrong from the 19th digit onwards. If that's the case you may
as well stick to using floating point. It does however appear you can
influence the precision. See:
# select exp( ln(2::numeric(50,30)) * 100 );
exp
----------------------------------------------------------------
1267650600228229401496703205375.991370405139384131115870698781
(1 row)
Using numeric(50,25) gets you only 28 correct digits. So, if you know
how big your result is going to be you can adjust the types to match
and get whatever precision you want. Given that you can estimate the
number of digits easily enough (it's linear with the value before the
exp()) maybe you can get it to automatically choose the right
precision?
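That closing suggestion (estimate the result size up front and set the precision accordingly) can be sketched like this; the helper name and guard-digit count are made up:

```python
from decimal import Decimal, getcontext
import math

def auto_pow(base: Decimal, exponent: Decimal, frac_digits: int = 20) -> Decimal:
    # Digits before the decimal point: about exponent * log10(base),
    # i.e. linear in the value that goes into exp().
    int_digits = max(1, math.ceil(float(exponent) * math.log10(float(base))))
    getcontext().prec = int_digits + frac_digits + 5   # plus guard digits
    return (base.ln() * exponent).exp()

print(auto_pow(Decimal(2), Decimal(100)))   # all 31 integer digits of 2^100 correct
```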
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
If you want to create a pow() function for numeric using existing
numeric functions, it [the new function] should be aware of the
precision of the inputs, and the precision of the output should be their
product.
So, if you do pow(numeric(10,5), numeric(10,5)) then the result column
should be numeric(100,25) if you want to retain full precision.
Hmmm....
I underestimated.
pow(99999.99999,99999.99999) =
9.998748785736894607828527462269893046126336085
91664915498635306081273911645075964079222720857427
35641018572673827935330501923067157794798212338823
24997145234949798725508071849154834025252682619864
09675931105114160107573542813573334036043627693673
32584230414090115274301822704676399594689777183090
95124350838052746795283582659784697437868624515447
84308955024802754764364277858847454870139679632204
93566098207186651878539285222697852739872657689082
77740528466769263852694444704577829403518386946691
11157539964528436618742040945886361696712501785143
49612003446329175703756667138162553151705912580792
12331560317684418171064195077598932031644579554853
98595138860229023469055949001949521877405516916475
97554564462253024119778312344592336542732038212175
43130812948451126588746192211036266786198594583755
89036373827433475892132965189682874790600247279436
07120265912512012429492123644988164587146533255393
93335345599658088256314460922495519381049143246081
37075434256493449284197921246089978660147299071527
8174795070535064342859550611e499999
So the precision calculation would be much more complicated.
On Thu, May 19, 2005 at 02:25:58PM -0700, Dann Corbit wrote:
Hmmm....
I underestimated.
pow(99999.99999,99999.99999) =
Yeah, a number with x digits raised to a power y digits long could have
a length approximating:
x * (10^y) digits
So two numbers both 4 digits long can have a result of up to 40,000
digits. You're only going to be able to represent them exactly for
cases where y is small and integral.
What's a meaningful limit? Do we simply say you get up to 100 digits
and that's it? Or an extra parameter so you can specify it directly?
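The size estimate checks out against Dann's example; a quick back-of-the-envelope in Python:

```python
import math

# Number of decimal digits in A^B is about B * log10(A). For
# pow(99999.99999, 99999.99999) that predicts roughly half a million
# digits, matching the e499999 exponent in the result above.
a = b = 99999.99999
digits = b * math.log10(a)
print(round(digits))   # 500000
```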
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Probably, the important meaningful cases are ones that have small
exponents (HOPEFULLY less than 25) used in interest calculations.
Million digit numbers are really only interesting in the field of pure
mathematics, since the number of elementary particles in the universe is
well under a googol (10^100).
But if someone has a billion dollars (and some do, of course -- even
potentially trillions if it is a government) and they want to do a long
term interest calculation accurate to the penny, then we should be
careful to get that answer right.
The calculation pow(huge,huge) will result in a big pile of fascinating
digits that won't really have much physical meaning.
At CONNX, we just do 100 digits using qfloat (about 104 actually).
Internally, all math is done using this type. Then we convert to the
smaller types [or character types] as requested.
I don't think that there is any business need for more than that.
A package like Maple might need to worry about it, or a theoretical
mathematician looking for patterns in digits or something like that.
But you can't please everybody.
"Dann Corbit" <DCorbit@connx.com> writes:
Probably, the important meaningful cases are ones that have small
exponents (HOPEFULLY less than 25) used in interest calculations.
No, even in interest calculations floating-point arithmetic is perfectly fine.
You do your floating point arithmetic to calculate the factor to use when
multiplying your fixed precision exact dollar amounts. You then store the
result again in exact form and do your account balancing in fixed precision
arithmetic to be sure you don't lose a penny here or there.
In fact the exponent can be much larger than 25 (think of monthly compounded
25 year mortgages, or worse, daily compounded savings accounts). But in those
cases the base will be very close to 1.
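That workflow can be sketched as: a float factor, applied once to an exact dollar amount, then quantized back to pennies (figures invented for illustration):

```python
from decimal import Decimal, ROUND_HALF_EVEN

principal = Decimal("250000.00")            # exact dollars
monthly_rate = 0.06 / 12                    # plain float is fine for the factor
factor = (1 + monthly_rate) ** (25 * 12)    # 25-year monthly compounding

# Apply the factor once, then round back to fixed two-decimal precision.
balance = (principal * Decimal(str(factor))).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(balance)
```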
There's really no use case for NUMERIC^NUMERIC except in the case of an
integral power, which is useful for number theory and cryptography.
--
greg
I find all these statements about the near-uselessness of
NUMERIC^NUMERIC to be pretty amazing. It's fine to say, "no one seems
to be asking for this, so we haven't implemented it yet", but, c'mon,
folks, Postgres gets used for more than "business cases".
- John D. Burger
MITRE
On Fri, May 20, 2005 at 08:19:58 -0400,
"John D. Burger" <john@mitre.org> wrote:
I find all these statements about the near-uselessness of
NUMERIC^NUMERIC to be pretty amazing. It's fine to say, "no one seems
to be asking for this, so we haven't implemented it yet", but, c'mon,
folks, Postgres gets used for more than "business cases".
It is pretty useless. If you are doing exact math, fractional exponents
don't fit. If you are using integer exponents, you can store usable
exponents in an int (arguably even an int2).
People may be interested in NUMERIC^NUMERIC MOD N, but if so they aren't
going to do the exponentiation first and then the mod operation.
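The MOD N point in code: modular exponentiation reduces at every squaring step, so the intermediate values never grow. A sketch (Python's built-in three-argument pow() does the same thing):

```python
def pow_mod(base: int, exp: int, mod: int) -> int:
    # Square-and-multiply, reducing mod n at every step, so intermediates
    # never exceed mod^2 -- unlike exponentiating first and reducing last.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

print(pow_mod(3, 100, 10**9 + 7) == pow(3, 100, 10**9 + 7))   # True
```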
On Fri, 20 May 2005, John D. Burger wrote:
I find all these statements about the near-uselessness of
NUMERIC^NUMERIC to be pretty amazing. It's fine to say, "no one seems
to be asking for this, so we haven't implemented it yet", but, c'mon,
folks, Postgres gets used for more than "business cases".
If people don't see the use of a function they aren't going to implement
it. In addition, there is a small, but non-zero cost to adding a
function/operator to the system (in the cost to maintain it at the very
least) and if the general belief is that the function or operator is
useless or nearly useless then it simply may not be worth adding.
On Fri, 2005-05-20 at 09:06, Stephan Szabo wrote:
On Fri, 20 May 2005, John D. Burger wrote:
I find all these statements about the near-uselessness of
NUMERIC^NUMERIC to be pretty amazing. It's fine to say, "no one seems
to be asking for this, so we haven't implemented it yet", but, c'mon,
folks, Postgres gets used for more than "business cases".
If people don't see the use of a function they aren't going to implement
it. In addition, there is a small, but non-zero cost to adding a
function/operator to the system (in the cost to maintain it at the very
least) and if the general belief is that the function or operator is
useless or nearly useless then it simply may not be worth adding.
A couple of points.
1: How much time has been expended in the last 5 or so years
"maintaining" the floating point exponentiation operator? Seriously. I
doubt any work has gone into maintaining it. I don't mean bug fixes. I
mean touching its code because something else changed, and therefore the
fp exponent code was affected. If someone has had to do something to
maintain it, I'd certainly welcome hearing from them. My guess is that
the total amount of time that's gone into maintaining the FP version of
this operator is zero, or nearly so, and, if a numeric version were
implemented, the amount of time that would go into maintaining it would
be the same: zero, or very nearly so.
I could be wrong, and would be unoffended to be proven so, but I don't
think I am. I think that argument is just hand waving.
2: How many people who DO work with large exponents and need arbitrary
precision have looked at postgresql, typed in "select 3^100", got back
5.15377520732011e+47, and simply gone to another piece of software and
never looked back? We don't know. And the attitude of "it seems useless
to me, so it must be useless to everybody else" isn't going to help
attract people who do things that seem esoteric and strange to you but
are important to them.
3: Is this worth submitting a patch for? I don't want to spend x hours
making a patch and 10x hours arguing over getting it accepted... :)