reducing NUMERIC size for 9.1

Started by Robert Haas almost 16 years ago · 35 messages · pgsql-hackers
#1Robert Haas
robertmhaas@gmail.com

EnterpriseDB asked me to develop the attached patch to reduce the
on-disk size of numeric and to submit it for inclusion in PG 9.1.
After searching the archives, I found a possible design for this by
Tom Lane based on an earlier proposal by Simon Riggs.

http://archives.postgresql.org/pgsql-hackers/2007-06/msg00715.php

The attached patch implements more or less the design described there,
and will essentially knock 2 bytes off the on-disk size of nearly all
numeric values anyone is likely to want to store, but without reducing
the overall range of the type; so, for people who are storing a lot of
numerics, it should save a great deal of storage space and, more
importantly, I/O. However, it does so in a way that should be
completely backward-compatible from a binary format standpoint, so
that pg_upgrade does not break.

I'm not entirely happy with the way I handled the variable-length
struct, although I don't think it's horrible, either. I'm willing to
rework it if someone has a better idea.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

Attachments:

numeric_2b.patch (application/octet-stream, +124/-36)
#2Brendan Jurd
direvus@gmail.com
In reply to: Robert Haas (#1)
Re: reducing NUMERIC size for 9.1

On 10 July 2010 00:58, Robert Haas <robertmhaas@gmail.com> wrote:

EnterpriseDB asked me to develop the attached patch to reduce the
on-disk size of numeric and to submit it for inclusion in PG 9.1.
After searching the archives, I found a possible design for this by
Tom Lane based on an earlier proposal by Simon Riggs.

Hi Robert,

I'm reviewing this patch for the commitfest, and so far everything in
the patch looks good. Compile and regression tests worked fine.

However, I was trying to find a simple way to verify that it really
was reducing the on-disk size of compact numeric values and didn't get
the results I was expecting.

I dropped one thousand numerics with value zero into a table and
checked the on-disk size of the relation with your patch and on a
stock 8.4 instance. In both cases the result was exactly the same.

Shouldn't the table be smaller with your patch? Or is there something
wrong with my test?

CREATE TEMP TABLE numeric_short (a numeric);

INSERT INTO numeric_short (a)
SELECT 0::numeric FROM generate_series(1, 1000) i;

Regards,
BJ

#3Robert Haas
robertmhaas@gmail.com
In reply to: Brendan Jurd (#2)
Re: reducing NUMERIC size for 9.1

On Jul 15, 2010, at 11:58 AM, Brendan Jurd <direvus@gmail.com> wrote:

On 10 July 2010 00:58, Robert Haas <robertmhaas@gmail.com> wrote:

EnterpriseDB asked me to develop the attached patch to reduce the
on-disk size of numeric and to submit it for inclusion in PG 9.1.
After searching the archives, I found a possible design for this by
Tom Lane based on an earlier proposal by Simon Riggs.

Hi Robert,

I'm reviewing this patch for the commitfest, and so far everything in
the patch looks good. Compile and regression tests worked fine.

However, I was trying to find a simple way to verify that it really
was reducing the on-disk size of compact numeric values and didn't get
the results I was expecting.

I dropped one thousand numerics with value zero into a table and
checked the on-disk size of the relation with your patch and on a
stock 8.4 instance. In both cases the result was exactly the same.

Shouldn't the table be smaller with your patch? Or is there something
wrong with my test?

CREATE TEMP TABLE numeric_short (a numeric);

INSERT INTO numeric_short (a)
SELECT 0::numeric FROM generate_series(1, 1000) i;

Well, on that test, you'll save only 2000 bytes, which is less than a full block, so there's no guarantee the difference would be noticeable at the relation level. Scale it up by a factor of 10 and the difference should be measurable.

You might also look at testing with pg_column_size().

...Robert

#4Brendan Jurd
direvus@gmail.com
In reply to: Robert Haas (#3)
Re: reducing NUMERIC size for 9.1

On 16 July 2010 03:47, Robert Haas <robertmhaas@gmail.com> wrote:

On Jul 15, 2010, at 11:58 AM, Brendan Jurd <direvus@gmail.com> wrote:

I dropped one thousand numerics with value zero into a table and
checked the on-disk size of the relation with your patch and on a
stock 8.4 instance.  In both cases the result was exactly the same.

Shouldn't the table be smaller with your patch?  Or is there something
wrong with my test?

Well, on that test, you'll save only 2000 bytes, which is less than a full block, so there's no guarantee the difference would be noticeable at the relation level.  Scale it up by a factor of 10 and the difference should be measurable.

You might also look at testing with pg_column_size().

pg_column_size() did return the results I was expecting.
pg_column_size(0::numeric) is 8 bytes on 8.4 and it's 6 bytes on HEAD
with your patch.

However, even with 1 million rows of 0::numeric in my test table,
there was no difference at all in the on-disk relation size (36290560
with 36249600 in the table and 32768 in the fsm).

At this scale we should be seeing around 2 million bytes saved, but
instead the tables are identical. Is there some kind of disconnect in
how the new short numeric is making it to the disk, or perhaps another
effect interfering with my test?

Cheers,
BJ

#5Richard Huxton
dev@archonet.com
In reply to: Brendan Jurd (#4)
Re: reducing NUMERIC size for 9.1

On 16/07/10 13:44, Brendan Jurd wrote:

pg_column_size() did return the results I was expecting.
pg_column_size(0::numeric) is 8 bytes on 8.4 and it's 6 bytes on HEAD
with your patch.

At this scale we should be seeing around 2 million bytes saved, but
instead the tables are identical. Is there some kind of disconnect in
how the new short numeric is making it to the disk, or perhaps another
effect interfering with my test?

You've probably got rows being aligned to a 4-byte boundary. You're
probably not going to see any change unless you have a couple of 1-byte
columns that get placed after the numeric. If you went from 10 bytes
down to 8, that should be visible.

--
Richard Huxton
Archonet Ltd

#6Brendan Jurd
direvus@gmail.com
In reply to: Richard Huxton (#5)
Re: reducing NUMERIC size for 9.1

On 16 July 2010 22:51, Richard Huxton <dev@archonet.com> wrote:

On 16/07/10 13:44, Brendan Jurd wrote:

At this scale we should be seeing around 2 million bytes saved, but
instead the tables are identical.  Is there some kind of disconnect in
how the new short numeric is making it to the disk, or perhaps another
effect interfering with my test?

You've probably got rows being aligned to a 4-byte boundary. You're probably
not going to see any change unless you have a couple of 1-byte columns that
get placed after the numeric. If you went from 10 bytes down to 8, that
should be visible.

Ah, thanks for the hint Richard. I didn't see any change with two
1-byte columns after the numeric, but with four such columns I did
finally see a difference.

Test script:

BEGIN;

CREATE TEMP TABLE foo (a numeric, b bool, c bool, d bool, e bool);

INSERT INTO foo (a, b, c, d, e)
SELECT 0::numeric, false, true, i % 2 = 0, i % 2 = 1
FROM generate_series(1, 1000000) i;

SELECT pg_total_relation_size('foo'::regclass);

ROLLBACK;

Results:

8.4: 44326912
HEAD with patch: 36290560

That settles my concern and I'm happy to pass this along to a committer.

Cheers,
BJ

#7Thom Brown
thombrown@gmail.com
In reply to: Brendan Jurd (#6)
Re: reducing NUMERIC size for 9.1

On 16 July 2010 14:14, Brendan Jurd <direvus@gmail.com> wrote:

On 16 July 2010 22:51, Richard Huxton <dev@archonet.com> wrote:

On 16/07/10 13:44, Brendan Jurd wrote:

At this scale we should be seeing around 2 million bytes saved, but
instead the tables are identical.  Is there some kind of disconnect in
how the new short numeric is making it to the disk, or perhaps another
effect interfering with my test?

You've probably got rows being aligned to a 4-byte boundary. You're probably
not going to see any change unless you have a couple of 1-byte columns that
get placed after the numeric. If you went from 10 bytes down to 8, that
should be visible.

Ah, thanks for the hint Richard.  I didn't see any change with two
1-byte columns after the numeric, but with four such columns I did
finally see a difference.

Test script:

BEGIN;

CREATE TEMP TABLE foo (a numeric, b bool, c bool, d bool, e bool);

INSERT INTO foo (a, b, c, d, e)
SELECT 0::numeric, false, true, i % 2 = 0, i % 2 = 1
FROM generate_series(1, 1000000) i;

SELECT pg_total_relation_size('foo'::regclass);

ROLLBACK;

Results:

8.4: 44326912
HEAD with patch: 36290560

That settles my concern and I'm happy to pass this along to a committer.

Cheers,
BJ

Joy! :) Nice patch Robert.

Thom

#8David E. Wheeler
david@kineticode.com
In reply to: Thom Brown (#7)
Re: reducing NUMERIC size for 9.1

On Jul 16, 2010, at 6:17 AM, Thom Brown wrote:

Joy! :) Nice patch Robert.

Indeed.

What are the implications for pg_upgrade? Will a database with values created before the patch continue to work after the patch has been applied (as happened with the new hstore in 9.0), or will pg_upgrade need to be taught how to upgrade the old storage format?

Best,

David

#9Bruce Momjian
bruce@momjian.us
In reply to: David E. Wheeler (#8)
Re: reducing NUMERIC size for 9.1

David E. Wheeler wrote:

On Jul 16, 2010, at 6:17 AM, Thom Brown wrote:

Joy! :) Nice patch Robert.

Indeed.

What are the implications for pg_upgrade? Will a database with values
created before the patch continue to work after the patch has been
applied (as happened with the new hstore in 9.0), or will pg_upgrade
need to be taught how to upgrade the old storage format?

Robert told me the old format continues to work in the upgraded
databases.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ None of us is going to be here forever. +

#10David E. Wheeler
david@kineticode.com
In reply to: Bruce Momjian (#9)
Re: reducing NUMERIC size for 9.1

On Jul 16, 2010, at 9:04 AM, Bruce Momjian wrote:

What are the implications for pg_upgrade? Will a database with values
created before the patch continue to work after the patch has been
applied (as happened with the new hstore in 9.0), or will pg_upgrade
need to be taught how to upgrade the old storage format?

Robert told me the old format continues to work in the upgraded
databases.

Awesome. rhaas++

Best,

David

#11Hitoshi Harada
umi.tanuki@gmail.com
In reply to: Brendan Jurd (#4)
Re: reducing NUMERIC size for 9.1

2010/7/16 Brendan Jurd <direvus@gmail.com>:

On 16 July 2010 03:47, Robert Haas <robertmhaas@gmail.com> wrote:

You might also look at testing with pg_column_size().

pg_column_size() did return the results I was expecting.
pg_column_size(0::numeric) is 8 bytes on 8.4 and it's 6 bytes on HEAD
with your patch.

However, even with 1 million rows of 0::numeric in my test table,
there was no difference at all in the on-disk relation size (36290560
with 36249600 in the table and 32768 in the fsm).

At this scale we should be seeing around 2 million bytes saved, but
instead the tables are identical.  Is there some kind of disconnect in
how the new short numeric is making it to the disk, or perhaps another
effect interfering with my test?

What about large ARRAYs of numeric? When I developed a tinyint type for
myself once upon a time, the array size could get reduced as well.

Regards,

--
Hitoshi Harada

#12Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#1)
Re: reducing NUMERIC size for 9.1

Robert Haas <robertmhaas@gmail.com> writes:

I'm not entirely happy with the way I handled the variable-length
struct, although I don't think it's horrible, either. I'm willing to
rework it if someone has a better idea.

I don't like the way you did that either (specifically, not the kluge
in NUMERIC_DIGITS()). It would probably work better if you declared
two different structs, or a union of same, to represent the two layout
cases.

A couple of other thoughts:

n_sign_dscale is now pretty inappropriately named, probably better to
change the field name. This will also help to catch anything that's
not using the macros. (Renaming the n_weight field, or at least burying
it in an extra level of struct, would be helpful for the same reason.)

It seems like you've handled the NAN case a bit awkwardly. Since the
weight is uninteresting for a NAN, it's okay to not store the weight
field, so I think what you should do is consider that the dscale field
is still full-width, ie the format of the first word remains old-style
not new-style. I don't remember whether dscale is meaningful for a NAN,
but if it is, your approach is constraining what is possible to store,
and is also breaking compatibility with old databases.

Also, I wonder whether you can do anything with depending on the actual
bit values of the flag bits --- specifically, it's short header format
iff first bit is set. The NUMERIC_HEADER_SIZE macro in particular could
be made more efficient with that.

The sign extension code in the NUMERIC_WEIGHT() macro seems a bit
awkward; I wonder if there's a better way. One solution might be to
offset the value (ie, add or subtract NUMERIC_SHORT_WEIGHT_MIN) rather
than try to sign-extend per se.

Please do NOT commit this:

(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
! errmsg("value overflows numeric format %x w=%d s=%u",
! result->n_sign_dscale,
! NUMERIC_WEIGHT(result), NUMERIC_DSCALE(result))));

or at least hide it in "#ifdef DEBUG_NUMERIC" or some such.

Other than that the code changes look pretty clean, I'm mostly just
dissatisfied with the access macros.

regards, tom lane

#13Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#12)
Re: reducing NUMERIC size for 9.1

On Fri, Jul 16, 2010 at 2:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

I'm not entirely happy with the way I handled the variable-length
struct, although I don't think it's horrible, either. I'm willing to
rework it if someone has a better idea.

I don't like the way you did that either (specifically, not the kluge
in NUMERIC_DIGITS()).  It would probably work better if you declared
two different structs, or a union of same, to represent the two layout
cases.

A couple of other thoughts:

n_sign_dscale is now pretty inappropriately named, probably better to
change the field name.  This will also help to catch anything that's
not using the macros.  (Renaming the n_weight field, or at least burying
it in an extra level of struct, would be helpful for the same reason.)

I'm not sure what you have in mind here. If we create a union of two
structs, we'll still have to pick one of them to use to check the high
bits of the first word, so I'm not sure we'll be adding all that much
in terms of clarity. One possibility would be to name the fields
something like n_header1 and n_header2, or even just n_header[], but
I'm not sure if that's any better. If it is I'm happy to do it.

It seems like you've handled the NAN case a bit awkwardly.  Since the
weight is uninteresting for a NAN, it's okay to not store the weight
field, so I think what you should do is consider that the dscale field
is still full-width, ie the format of the first word remains old-style
not new-style.  I don't remember whether dscale is meaningful for a NAN,
but if it is, your approach is constraining what is possible to store,
and is also breaking compatibility with old databases.

There is only one NaN value. Neither weight nor dscale is meaningful.
I think if the high two bits of the first word are 11 we never examine
anything else - do you see somewhere that we're doing otherwise?

Also, I wonder whether you can do anything with depending on the actual
bit values of the flag bits --- specifically, it's short header format
iff first bit is set.  The NUMERIC_HEADER_SIZE macro in particular could
be made more efficient with that.

Right, OK.

The sign extension code in the NUMERIC_WEIGHT() macro seems a bit
awkward; I wonder if there's a better way.  One solution might be to
offset the value (ie, add or subtract NUMERIC_SHORT_WEIGHT_MIN) rather
than try to sign-extend per se.

Hmm... so, if the weight is X we store the value
X-NUMERIC_SHORT_WEIGHT_MIN as an unsigned integer? That's kind of a
funny representation - I *think* it works out to sign extension with
the high bit flipped. I guess we could do it that way, but it might
make it harder/more confusing to do bit arithmetic with the weight
sign bit later on.

Please do NOT commit this:

                               (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
!                                errmsg("value overflows numeric format %x w=%d s=%u",
!                                       result->n_sign_dscale,
!                                       NUMERIC_WEIGHT(result), NUMERIC_DSCALE(result))));

or at least hide it in "#ifdef DEBUG_NUMERIC" or some such.

Woopsie. That's debugging leftovers, sorry.

Other than that the code changes look pretty clean, I'm mostly just
dissatisfied with the access macros.

Thanks for the review.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#14Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#13)
Re: reducing NUMERIC size for 9.1

[ gradually catching up on email ]

Robert Haas <robertmhaas@gmail.com> writes:

On Fri, Jul 16, 2010 at 2:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

I don't like the way you did that either (specifically, not the kluge
in NUMERIC_DIGITS()). It would probably work better if you declared
two different structs, or a union of same, to represent the two layout
cases.

n_sign_dscale is now pretty inappropriately named, probably better to
change the field name. This will also help to catch anything that's
not using the macros. (Renaming the n_weight field, or at least burying
it in an extra level of struct, would be helpful for the same reason.)

I'm not sure what you have in mind here. If we create a union of two
structs, we'll still have to pick one of them to use to check the high
bits of the first word, so I'm not sure we'll be adding all that much
in terms of clarity.

No, you can do something like this:

typedef struct numeric_short
{
uint16 word1;
NumericDigit digits[1];
} numeric_short;

typedef struct numeric_long
{
uint16 word1;
int16 weight;
NumericDigit digits[1];
} numeric_long;

typedef union numeric
{
uint16 word1;
numeric_short short;
numeric_long long;
} numeric;

and then access word1 either directly or (after having identified which
format it is) via one of the sub-structs. If you really wanted to get
pedantic you could have a third sub-struct representing the format for
NaNs, but since those are just going to be word1 it may not be worth the
trouble.

It seems like you've handled the NAN case a bit awkwardly. Since the
weight is uninteresting for a NAN, it's okay to not store the weight
field, so I think what you should do is consider that the dscale field
is still full-width, ie the format of the first word remains old-style
not new-style. I don't remember whether dscale is meaningful for a NAN,
but if it is, your approach is constraining what is possible to store,
and is also breaking compatibility with old databases.

There is only one NaN value. Neither weight nor dscale is meaningful.
I think if the high two bits of the first word are 11 we never examine
anything else - do you see somewhere that we're doing otherwise?

I hadn't actually looked. I think though that it's a mistake to break
compatibility on both dscale and weight when you only need to break one.
Also, weight is *certainly* uninteresting for NaNs since it's not even
meaningful unless there are digits. dscale could conceivably be worth
something.

The sign extension code in the NUMERIC_WEIGHT() macro seems a bit
awkward; I wonder if there's a better way. One solution might be to
offset the value (ie, add or subtract NUMERIC_SHORT_WEIGHT_MIN) rather
than try to sign-extend per se.

Hmm... so, if the weight is X we store the value
X-NUMERIC_SHORT_WEIGHT_MIN as an unsigned integer? That's kind of a
funny representation - I *think* it works out to sign extension with
the high bit flipped. I guess we could do it that way, but it might
make it harder/more confusing to do bit arithmetic with the weight
sign bit later on.

Yeah, it was just an idea. It seems like there should be an easier way
to extract the sign-extended value, though.

regards, tom lane

#15Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#14)
Re: reducing NUMERIC size for 9.1

On Wed, Jul 28, 2010 at 3:00 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

On Fri, Jul 16, 2010 at 2:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

I don't like the way you did that either (specifically, not the kluge
in NUMERIC_DIGITS()).  It would probably work better if you declared
two different structs, or a union of same, to represent the two layout
cases.

n_sign_dscale is now pretty inappropriately named, probably better to
change the field name.  This will also help to catch anything that's
not using the macros.  (Renaming the n_weight field, or at least burying
it in an extra level of struct, would be helpful for the same reason.)

I'm not sure what you have in mind here.  If we create a union of two
structs, we'll still have to pick one of them to use to check the high
bits of the first word, so I'm not sure we'll be adding all that much
in terms of clarity.

No, you can do something like this:

typedef struct numeric_short
{
       uint16  word1;
       NumericDigit digits[1];
} numeric_short;

typedef struct numeric_long
{
       uint16  word1;
       int16   weight;
       NumericDigit digits[1];
} numeric_long;

typedef union numeric
{
       uint16  word1;
       numeric_short   short;
       numeric_long    long;
} numeric;

That doesn't quite work because there's also a varlena header that has
to be accounted for, so the third member of the union can't be a
simple uint16. I'm wondering if it makes sense to do something along
these lines:

typedef struct NumericData
{
int32 varlen;
int16 n_header;
union {
struct {
char n_data[1];
} short;
struct {
uint16 n_weight;
char n_data[1];
} long;
};
} NumericData;

Why n_data as char[1] instead of NumericDigit, you ask? It's that way
now, mostly I think so that the rest of the system isn't allowed to
know what underlying type is being used for NumericDigit; it looks
like previously it was signed char, but now it's int16.

It seems like you've handled the NAN case a bit awkwardly.  Since the
weight is uninteresting for a NAN, it's okay to not store the weight
field, so I think what you should do is consider that the dscale field
is still full-width, ie the format of the first word remains old-style
not new-style.  I don't remember whether dscale is meaningful for a NAN,
but if it is, your approach is constraining what is possible to store,
and is also breaking compatibility with old databases.

There is only one NaN value.  Neither weight nor dscale is meaningful.
I think if the high two bits of the first word are 11 we never examine
anything else - do you see somewhere that we're doing otherwise?

I hadn't actually looked.  I think though that it's a mistake to break
compatibility on both dscale and weight when you only need to break one.
Also, weight is *certainly* uninteresting for NaNs since it's not even
meaningful unless there are digits.  dscale could conceivably be worth
something.

I don't think I'm breaking compatibility on anything. Can you clarify
what part of the code you're referring to here? I'm sort of lost.

The sign extension code in the NUMERIC_WEIGHT() macro seems a bit
awkward; I wonder if there's a better way.  One solution might be to
offset the value (ie, add or subtract NUMERIC_SHORT_WEIGHT_MIN) rather
than try to sign-extend per se.

Hmm... so, if the weight is X we store the value
X-NUMERIC_SHORT_WEIGHT_MIN as an unsigned integer?  That's kind of a
funny representation - I *think* it works out to sign extension with
the high bit flipped.  I guess we could do it that way, but it might
make it harder/more confusing to do bit arithmetic with the weight
sign bit later on.

Yeah, it was just an idea.  It seems like there should be an easier way
to extract the sign-extended value, though.

It seemed a bit awkward to me, too, but I'm not sure there's a better one.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#16Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#15)
Re: reducing NUMERIC size for 9.1

Robert Haas <robertmhaas@gmail.com> writes:

On Wed, Jul 28, 2010 at 3:00 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

No, you can do something like this:

typedef union numeric
{
uint16 word1;
numeric_short short;
numeric_long long;
} numeric;

That doesn't quite work because there's also a varlena header that has
to be accounted for, so the third member of the union can't be a
simple uint16.

Yeah, you would need an additional layer of struct to represent the
numeric with a length word in front of it. I think this is not
necessarily bad because it would perhaps open the door to working
directly with short-varlena-header values, which is never going to
be possible with this:

typedef struct NumericData
{
int32 varlen;
int16 n_header;
union { ...

OTOH alignment considerations may make that idea hopeless anyway.

Why n_data as char[1] instead of NumericDigit, you ask?

Yes, we'd have to export NumericDigit if we wanted to declare these
structs "properly" in numeric.h. I wonder if that decision should
be revisited. I'd lean to making the whole struct local to numeric.c
though. Is there anyplace else that really ought to see it?

I hadn't actually looked. I think though that it's a mistake to break
compatibility on both dscale and weight when you only need to break one.
Also, weight is *certainly* uninteresting for NaNs since it's not even
meaningful unless there are digits. dscale could conceivably be worth
something.

I don't think I'm breaking compatibility on anything. Can you clarify
what part of the code you're referring to here? I'm sort of lost.

On-disk is what I'm thinking about. Right now, a NaN's first word is
all dscale except the sign bits. You're proposing to change that
but I think it's unnecessary to do so. If we do it the way I'm
thinking, dscale would still mean the same in a NaN, and we'd simply
be ignoring the weight field (which might or might not be there
physically).

regards, tom lane

#17Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#16)
Re: reducing NUMERIC size for 9.1

On Thu, Jul 29, 2010 at 1:20 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Yeah, you would need an additional layer of struct to represent the
numeric with a length word in front of it.  I think this is not
necessarily bad because it would perhaps open the door to working
directly with short-varlena-header values, which is never going to
be possible with this:

typedef struct NumericData
{
    int32           varlen;
    int16           n_header;
    union { ...

OTOH alignment considerations may make that idea hopeless anyway.

My understanding of our alignment rules for on-disk storage is still a
bit fuzzy, but as I understand it we don't align varlenas. So
presumably if we get a pointer directly into a disk block, the first
byte might happen to be not aligned, which would make the rest of the
structure aligned; and from playing around with the system, it looks
like if we get a value from anywhere else it's typically using the
4-byte varlena header. So it seems like it might be possible to write
code that aligns the data only if needed and otherwise skips a
palloc-and-copy cycle. I'm not totally sure that would be a win, but
it could be. Actually, I had a thought that it might be even more of
a win if you added a flag to the NumericVar representation indicating
whether the digit array was palloc'd or from the original tuple. Then
you might be able to avoid TWO palloc-and-copy cycles, although at the
price of a fairly significant code restructuring.

Which is a long-winded way of saying - it's probably not hopeless.

Why n_data as char[1] instead of NumericDigit, you ask?

Yes, we'd have to export NumericDigit if we wanted to declare these
structs "properly" in numeric.h.  I wonder if that decision should
be revisited.  I'd lean to making the whole struct local to numeric.c
though.  Is there anyplace else that really ought to see it?

Probably not. btree_gist is using it, but that's it, at least as far
as our tree is concerned. Attached please find a patch to make the
numeric representation private and add a convenience function
numeric_is_nan() for the benefit of btree_gist. If this looks sane,
I'll go ahead and commit it, which will simplify review of the main
patch once I rebase it over these changes.

I hadn't actually looked.  I think though that it's a mistake to break
compatibility on both dscale and weight when you only need to break one.
Also, weight is *certainly* uninteresting for NaNs since it's not even
meaningful unless there are digits.  dscale could conceivably be worth
something.

I don't think I'm breaking compatibility on anything.  Can you clarify
what part of the code you're referring to here?  I'm sort of lost.

On-disk is what I'm thinking about.  Right now, a NaN's first word is
all dscale except the sign bits.  You're proposing to change that
but I think it's unnecessary to do so.

*Where* am I proposing this? The new versions of NUMERIC_WEIGHT() and
NUMERIC_DSCALE() determine where to look for the bits in question
using NUMERIC_IS_SHORT(), which just tests NUMERIC_FLAGBITS(n) ==
NUMERIC_SHORT. There's nothing in there about the NaN case at all.
Even if there were, it's irrelevant because those bits are never
examined and, as far as I can tell, will always be zero barring a
cosmic ray hit. But even if they WERE examined, I don't see where I'm
changing the interpretation of them; in fact, I think I'm very
explicitly NOT doing that.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

Attachments:

make_numericdata_private.patch (application/octet-stream, +47/-35)
#18Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#17)
Re: reducing NUMERIC size for 9.1

Robert Haas <robertmhaas@gmail.com> writes:

On Thu, Jul 29, 2010 at 1:20 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

On-disk is what I'm thinking about. Right now, a NaN's first word is
all dscale except the sign bits. You're proposing to change that
but I think it's unnecessary to do so.

*Where* am I proposing this?

Um, your patch has the comment

! * If the high bits of n_scale_dscale are NUMERIC_NAN, the two-byte header
! * format is also used, but the low bits of n_scale_dscale are discarded in
! * this case.

but now that I look a bit more closely, I don't think that's what the
code is doing. You've got the NUMERIC_DSCALE and NUMERIC_WEIGHT access
macros testing specifically for NUMERIC_IS_SHORT, not for high-bit-set
which I think is what I was assuming they'd do. So actually that code
is good as is: a NAN still has the old header format. It's just the
comment that's wrong.

regards, tom lane

#19Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#18)
Re: reducing NUMERIC size for 9.1

On Thu, Jul 29, 2010 at 4:37 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

On Thu, Jul 29, 2010 at 1:20 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

On-disk is what I'm thinking about.  Right now, a NaN's first word is
all dscale except the sign bits.  You're proposing to change that
but I think it's unnecessary to do so.

*Where* am I proposing this?

Um, your patch has the comment

!  * If the high bits of n_scale_dscale are NUMERIC_NAN, the two-byte header
!  * format is also used, but the low bits of n_scale_dscale are discarded in
!  * this case.

but now that I look a bit more closely, I don't think that's what the
code is doing.  You've got the NUMERIC_DSCALE and NUMERIC_WEIGHT access
macros testing specifically for NUMERIC_IS_SHORT, not for high-bit-set
which I think is what I was assuming they'd do.  So actually that code
is good as is: a NAN still has the old header format.  It's just the
comment that's wrong.

OK. I think you're misinterpreting the point of that comment, which
may mean that it needs some clarification. By "the two byte header
format is also used", I think I really meant "the header (and in fact
the entire value) is just 2 bytes". Really, the low order bits have
neither the old interpretation nor the new interpretation: they don't
have any interpretation at all - they're completely meaningless.
That's what the part after the word "but" was intended to clarify.
Every routine in numeric.c checks for NUMERIC_IS_NAN() and gives it
some special handling before doing anything else, so NUMERIC_WEIGHT()
and NUMERIC_DSCALE() are never called in that case.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#20Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#19)
Re: reducing NUMERIC size for 9.1

Robert Haas <robertmhaas@gmail.com> writes:

OK. I think you're misinterpreting the point of that comment, which
may mean that it needs some clarification. By "the two byte header
format is also used", I think I really meant "the header (and in fact
the entire value) is just 2 bytes". Really, the low order bits have
neither the old interpretation nor the new interpretation: they don't
have any interpretation at all - they're completely meaningless.
That's what the part after the word "but" was intended to clarify.
Every routine in numeric.c checks for NUMERIC_IS_NAN() and gives it
some special handling before doing anything else, so NUMERIC_WEIGHT()
and NUMERIC_DSCALE() are never called in that case.

I would suggest the comment ought to read something like

NaN values also use a two-byte header (in fact, the
whole value is always only two bytes). The low order bits of
the header word are available to store dscale, though dscale
is not currently used with NaNs.

regards, tom lane

#21Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#20)
#22Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#21)
#23Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#22)
#24Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#23)
#25Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#24)
#26Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#25)
#27Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#26)
#28Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#27)
#29Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#27)
#30Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#28)
#31Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#30)
#32Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#31)
#33Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#32)
#34Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#33)
#35Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#34)