Selecting reals into doubles

Started by Will Newton · about 22 years ago · 3 messages · general
#1Will Newton
will@gbdirect.co.uk

I have attached some SQL which produces what are, to me at least, rather
unexpected results. Selecting real columns into double precision columns
loses some precision. Is this expected or documented anywhere?

Thanks,

Attachments:

precision.sql (text/plain; charset=us-ascii)
#2Bruno Wolff III
bruno@wolff.to
In reply to: Will Newton (#1)
Re: Selecting reals into doubles

On Wed, Mar 03, 2004 at 11:19:15 +0000,
Will Newton <will@gbdirect.co.uk> wrote:

I have attached some SQL which produces what are, to me at least, rather
unexpected results. Selecting real columns into double precision columns
loses some precision. Is this expected or documented anywhere?

You left out the output. But probably what you are seeing is the effect
of increased precision, not decreased precision. Neither of the two
numbers you entered is exactly representable as a floating point number.
When printed as single precision numbers you got back the same thing
you entered, because within the number of digits used to display
single precision numbers, those are the closest to what is stored.
This isn't going to be the case for double precision numbers
in general.

If you really want exact decimal fractions, you want to use the numeric type
to store them.
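[Editor's note: the following illustration is not part of the original thread. Bruno's point can be sketched outside SQL; this is a minimal Python sketch, assuming IEEE 754 single and double precision, which is what PostgreSQL's real and double precision types use on common platforms:]

```python
import struct

def as_float4(x: float) -> float:
    """Round a Python float (double precision) to the nearest
    single-precision value, as storing it in a 'real' column would."""
    return struct.unpack("f", struct.pack("f", x))[0]

stored = as_float4(20.20)   # what the real column actually holds

# Printed to ~6 significant digits (the float4 display width),
# it looks like exactly what was entered:
print("%g" % stored)        # 20.2

# Printed to ~15 significant digits (the float8 display width),
# the true stored value becomes visible:
print("%.15g" % stored)     # 20.2000007629395
```

The value never changes when it is widened to double precision; only the number of digits shown changes.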


Thanks,

DROP TABLE precision_test;
DROP TABLE precision_test2;

-- A single-precision (4-byte) floating point column.
CREATE TABLE precision_test
(
foo real
);

INSERT INTO precision_test
SELECT 20.20
UNION SELECT 1969.22;

-- A double-precision (8-byte) floating point column.
CREATE TABLE precision_test2
(
foo double precision
);

-- Copy the real values into the double precision column.
INSERT INTO precision_test2 (foo) SELECT foo from precision_test;

SELECT * FROM precision_test;
SELECT * FROM precision_test2;


#3Tom Lane
tgl@sss.pgh.pa.us
In reply to: Will Newton (#1)
Re: Selecting reals into doubles

Will Newton <will@gbdirect.co.uk> writes:

I have attached some SQL which produces what are, to me at least, rather
unexpected results. Selecting real columns into double precision columns
loses some precision. Is this expected or documented anywhere?

You shouldn't be surprised; this is a fundamental behavior of floating
point arithmetic anywhere.

There isn't any "loss of precision" per se --- the value represented in
the float8 column is the same as what was in the float4 column. The
difference is that the float8 output routine is programmed to print
about 15 digits of precision whereas the float4 routine prints no more
than 6. So you get to see the fact that the stored value wasn't really
20.2 but only something close to it.

If you find this surprising maybe you should be using type "numeric".
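[Editor's note: the following aside is not in the original messages. The exactness of numeric, in contrast to binary floating point, can be illustrated with Python's decimal module, which stores decimal fractions exactly the way PostgreSQL's numeric type does:]

```python
from decimal import Decimal

# Binary floating point cannot represent 20.20 or 1969.22 exactly,
# so arithmetic on them may show trailing noise:
print(20.20 + 1969.22)

# Decimal fractions (like PostgreSQL's numeric) are stored exactly,
# so the sum is exact:
print(Decimal("20.20") + Decimal("1969.22"))   # 1989.42
```

With numeric, what you insert is what you get back, at any display precision, at the cost of slower arithmetic than the hardware float types.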

regards, tom lane