Since '2001-09-09 01:46:40'::timestamp microseconds are lost when extracting epoch
Hello,
Steps to reproduce:
select extract(epoch from '2001-09-09 01:46:39.999999'::timestamp)
returns 999999999.999999 as expected
while
select extract(epoch from '2001-09-09 01:46:40.000021'::timestamp)
returns 1000000000.00002 - 1 microsecond is truncated.
Obviously, it is due to the fact that extract epoch returns double
precision which in turn has 15 decimal digits precision.
While there is a pretty simple workaround in C that returns
microseconds since the Unix epoch:
Datum
to_microseconds(PG_FUNCTION_ARGS)
{
    Timestamp arg = PG_GETARG_TIMESTAMP(0) + 946684800000000;

    PG_RETURN_INT64(arg);
}
I was not able to find any other way of doing this (i.e., without a C
function).
Petr Fedorov <petr.fedorov@phystech.edu> writes:
> select extract(epoch from '2001-09-09 01:46:40.000021'::timestamp)
> returns 1000000000.00002 - 1 microsecond is truncated.
> Obviously, it is due to the fact that extract epoch returns double
> precision which in turn has 15 decimal digits precision.
I can't get very excited about this. However, it might be worth
noting that v12 and HEAD print "1000000000.000021" as expected,
thanks to the Ryu float output code. You can get that from older
branches as well if you set extra_float_digits = 1.
By my arithmetic, IEEE float8 ought to be able to represent
microseconds accurately out to about 285 years either way from the
1970 epoch, so for practical purposes it'll be fine for a long time.
regards, tom lane
On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:
> Obviously, it is due to the fact that extract epoch returns double
> precision which in turn has 15 decimal digits precision.
I guess this deviation from the SQL standard ("exact numeric") made
sense when PostgreSQL used double for timestamps, but would break a
lot of queries if we changed it.
Thomas Munro <thomas.munro@gmail.com> writes:
> On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:
>> Obviously, it is due to the fact that extract epoch returns double
>> precision which in turn has 15 decimal digits precision.
> I guess this deviation from the SQL standard ("exact numeric") made
> sense when PostgreSQL used double for timestamps, but would break a
> lot of queries if we changed it.
Hmmm ... well, now that you mention it, would it really break things
if we made it return numeric? There's an implicit cast to float8,
so it seems like queries requiring that type would still work.
There might be a performance-related argument against switching,
perhaps.
regards, tom lane
It appears that extract epoch returns double precision, not float8. And
the program below seems to be demonstrating that there are enough
'floating-point numbers' as defined by IEEE-754 to represent
1000000000.000021 precisely enough:
#include <cmath>
#include <iostream>
#include <iomanip>
#include <limits>

int main() {
    double from = 1000000000.000020;
    std::cout << std::setprecision(56) << from
              << " (" << std::hexfloat << from << ") " << std::endl;
    for (auto i = 0; i < 15; ++i) {
        double to = std::nextafter(from, std::numeric_limits<double>::max());
        std::cout << std::defaultfloat << to
                  << " (" << std::hexfloat << to << ") " << std::endl;
        from = to;
    }
}
Outputs:
1000000000.00002002716064453125 (0x1.dcd65000000a8p+29)
1000000000.00002014636993408203125 (0x1.dcd65000000a9p+29)
1000000000.0000202655792236328125 (0x1.dcd65000000aap+29)
1000000000.00002038478851318359375 (0x1.dcd65000000abp+29)
1000000000.000020503997802734375 (0x1.dcd65000000acp+29)
1000000000.00002062320709228515625 (0x1.dcd65000000adp+29)
1000000000.0000207424163818359375 (0x1.dcd65000000aep+29)
1000000000.00002086162567138671875 (0x1.dcd65000000afp+29)
1000000000.0000209808349609375 (0x1.dcd65000000bp+29)
1000000000.00002110004425048828125 (0x1.dcd65000000b1p+29)
1000000000.0000212192535400390625 (0x1.dcd65000000b2p+29)
1000000000.00002133846282958984375 (0x1.dcd65000000b3p+29)
1000000000.000021457672119140625 (0x1.dcd65000000b4p+29)
1000000000.00002157688140869140625 (0x1.dcd65000000b5p+29)
1000000000.0000216960906982421875 (0x1.dcd65000000b6p+29)
1000000000.00002181529998779296875 (0x1.dcd65000000b7p+29)
I'm not an expert in floating point math but hopefully it means that no
type change is required - double precision can handle it.
And since it works correctly on v12 for this particular date, maybe all
that is needed is to verify that it works for other dates too! For
example, what was changed in v12 (compared to the 11.6 I use) so that
extract epoch works correctly?
On 02.12.2019 01:59, Tom Lane wrote:
> Thomas Munro <thomas.munro@gmail.com> writes:
>> On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:
>>> Obviously, it is due to the fact that extract epoch returns double
>>> precision which in turn has 15 decimal digits precision.
>> I guess this deviation from the SQL standard ("exact numeric") made
>> sense when PostgreSQL used double for timestamps, but would break a
>> lot of queries if we changed it.
> Hmmm ... well, now that you mention it, would it really break things
> if we made it return numeric?  There's an implicit cast to float8,
> so it seems like queries requiring that type would still work.
> There might be a performance-related argument against switching,
> perhaps.
> regards, tom lane
On Tue, Dec 3, 2019 at 12:08 AM Petr Fedorov <petr.fedorov@phystech.edu> wrote:
> It appears that extract epoch returns double precision, not float8. And
> the program below seems to be demonstrating that there are enough
> 'floating-point numbers' as defined by IEEE-754 to represent
> 1000000000.000021 precisely enough:
Double precision and float8 are different names for the same type in PostgreSQL.
> I'm not an expert in floating point math but hopefully it means that no
> type change is required - double precision can handle it.
Me neither, but the SQL standard requires us to use an exact numeric
type, so it's wrong on that level by definition.
It's also wrong because binary floating point numbers can't represent
0.000001 (one microsecond represented as seconds) exactly, and that's
our unit of counting for timestamps. You can get pretty far by
thinking of the decimal number you see on the screen as the true
number and the double as a fuzzy internal storage or transport that
does the job just fine due to the round trip conversion guarantee
provided by DBL_DIG, but the double is still going to have the wrong
value in some cases. As soon as you start doing any arithmetic or
comparisons with the double directly, interesting things can start to
happen to make the error visible and break things; for example
0.1::float8 + 0.2::float8 = 0.3::float8 is false.
> And since it works correctly on v12 for this particular date, maybe all
> that is needed is to verify that it works for other dates too! For
> example, what was changed in v12 (compared to the 11.6 I use) so that
> extract epoch works correctly?
PostgreSQL 12 adopted a different algorithm[1] for converting float8
to text that can affect how many digits are shown, as Tom explained.
The manual has some notes about it[2].
[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=02ddd499322ab6f2f0d58692955dc9633c2150fc
[2]: https://www.postgresql.org/docs/12/datatype-numeric.html#DATATYPE-FLOAT
On 2019-12-02 23:52, Thomas Munro wrote:
>> I'm not an expert in floating point math but hopefully it means that no
>> type change is required - double precision can handle it.
> Me neither, but the SQL standard requires us to use an exact numeric
> type, so it's wrong on that level by definition.
I looked into this (changing the return types of date_part()/extract()
from float8 to numeric).
One problem (other than perhaps performance, tbd.) is that this would no
longer allow processing infinite timestamps, since numeric does not
support infinity. It could be argued that running extract() on infinite
timestamps isn't very useful, but it's something to consider explicitly.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
> One problem (other than perhaps performance, tbd.) is that this would no
> longer allow processing infinite timestamps, since numeric does not
> support infinity.  It could be argued that running extract() on infinite
> timestamps isn't very useful, but it's something to consider explicitly.
I wonder if it's time to fix that, ie introduce +-Infinity into numeric.c.
This isn't the first time we've seen issues with numeric not being a
superset of float, and it won't be the last.
At first glance there's no free bits in the on-disk format for numeric,
but we could do something by defining the low-order bits of the header
word for a NaN to distinguish between real NaN and +/- infinity.
It looks like those bits should reliably be zero right now.
regards, tom lane
On 5/25/20 3:28 PM, Peter Eisentraut wrote:
> On 2019-12-02 23:52, Thomas Munro wrote:
>>> I'm not an expert in floating point math but hopefully it means that no
>>> type change is required - double precision can handle it.
>> Me neither, but the SQL standard requires us to use an exact numeric
>> type, so it's wrong on that level by definition.
> I looked into this (changing the return types of date_part()/extract()
> from float8 to numeric).
I think what would be better is to have a specific date_part function
for each part and have extract translate to the appropriate one. This
is particularly interesting for epoch but it would also allow us to
return the correct type mandated by the spec.
(I would also accept a specific date_part per return type instead of per
part, that would probably even be better.)
--
Vik Fearing
Vik Fearing <vik@postgresfriends.org> writes:
> On 5/25/20 3:28 PM, Peter Eisentraut wrote:
>> I looked into this (changing the return types of date_part()/extract()
>> from float8 to numeric).
> I think what would be better is to have a specific date_part function
> for each part and have extract translate to the appropriate one.
Doesn't really work for upwards compatibility with existing views,
which will have calls to date_part(text, ...) embedded in them.
Actually, now that I think about it, changing the result type of
date_part() is likely to be problematic anyway for such cases.
It's not going to be good if pg_upgrade's dump/restore of a view
results in a new output column type; especially if it's a
materialized view.
So maybe what we'd have to do is leave date_part() alone for
legacy compatibility, and invent new functions that the extract()
syntax would now be translated to. While at it, maybe we could
fix things so that the syntax reverse-lists the same way instead
of injecting Postgres-isms...
regards, tom lane
On 5/25/20 6:40 PM, Tom Lane wrote:
> Vik Fearing <vik@postgresfriends.org> writes:
>> On 5/25/20 3:28 PM, Peter Eisentraut wrote:
>>> I looked into this (changing the return types of date_part()/extract()
>>> from float8 to numeric).
>> I think what would be better is to have a specific date_part function
>> for each part and have extract translate to the appropriate one.
> Doesn't really work for upwards compatibility with existing views,
> which will have calls to date_part(text, ...) embedded in them.
> Actually, now that I think about it, changing the result type of
> date_part() is likely to be problematic anyway for such cases.
> It's not going to be good if pg_upgrade's dump/restore of a view
> results in a new output column type; especially if it's a
> materialized view.
> So maybe what we'd have to do is leave date_part() alone for
> legacy compatibility, and invent new functions that the extract()
> syntax would now be translated to.
I'm sorry, I wasn't clear. I was suggesting adding new functions while
also keeping the current generic function. So exactly what you say in
that last paragraph.
Although <extract expression> has a fixed list of constant parts,
date_part() allows the part to be variable. So we need to keep it
anyway for cases like this contrived example:
SELECT date_part(p, now())
FROM UNNEST(ARRAY['epoch', 'year', 'second']) AS u (p)
> While at it, maybe we could
> fix things so that the syntax reverse-lists the same way instead
> of injecting Postgres-isms...
I'm not sure what this means.
--
Vik Fearing
Vik Fearing <vik@postgresfriends.org> writes:
> On 5/25/20 6:40 PM, Tom Lane wrote:
>> While at it, maybe we could
>> fix things so that the syntax reverse-lists the same way instead
>> of injecting Postgres-isms...
> I'm not sure what this means.
This:
regression=# create view myview as select extract(year from current_timestamp) as y;
CREATE VIEW
regression=# \d+ myview
...
View definition:
SELECT date_part('year'::text, CURRENT_TIMESTAMP) AS y;
What had been a 100% spec-compliant view definition is now quite
Postgres-specific. I fixed some similar problems in 0bb51aa96 (before
that, the CURRENT_TIMESTAMP part would've reverse-listed differently
too); but I didn't tackle EXTRACT(), SUBSTRING(), and other cases.
I'm not claiming that we really need to fix all of those. But if we are
going to pick nits about which data type EXTRACT() returns then I think
it's legit to worry about its reverse-list representation at the same
time ... especially if we must touch the grammar's translation anyway.
regards, tom lane
I wrote:
> What had been a 100% spec-compliant view definition is now quite
> Postgres-specific.  I fixed some similar problems in 0bb51aa96 (before
> that, the CURRENT_TIMESTAMP part would've reverse-listed differently
> too); but I didn't tackle EXTRACT(), SUBSTRING(), and other cases.
> I'm not claiming that we really need to fix all of those.  But if we are
> going to pick nits about which data type EXTRACT() returns then I think
> it's legit to worry about its reverse-list representation at the same
> time ... especially if we must touch the grammar's translation anyway.
BTW, shortly after sending that I had an idea about how to do it without
adding a boatload of new parsetree infrastructure, which has been the
main reason why nobody has wanted to tackle it. The obvious way to do
this is to make a new kind of expression node, but that cascades into
lots and lots of places (see 0bb51aa96, plus the later commits that
fixed oversights in it :-(). It's a lot of work for a mostly-cosmetic
issue.
However: suppose that we continue to translate these things into FuncExpr
nodes, the same as always, but we add a new CoercionForm variant, say
COERCE_SQL_SYNTAX. 99% of the system ignores FuncExpr.funcformat,
and would continue to do so, but ruleutils.c would take it to mean
that (1) the call should be reverse-listed as some special SQL syntax
and (2) the funcid is one of a small set of built-in functions for
which ruleutils.c knows what to emit. (If it doesn't recognize the
funcid, it could either throw an error, or fall back to normal display
of the node.) For cases such as EXTRACT, this would also represent
a promise that specific arguments are Const nodes from which the
desired keyword can be extracted.
This is kind of an abuse of "CoercionForm", since that typedef name
implies that it only talks about how to handle cast cases, but
semantically it's always been a how-to-display-function-calls thing.
We could either hold our noses about that or rename the typedef.
If we went this way then we could easily clean up most of the other
weird-SQL-syntax function call cases, incrementally over time,
without a lot of additional work.
regards, tom lane
On Mon, May 25, 2020 at 09:43:32AM -0400, Tom Lane wrote:
> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
>> One problem (other than perhaps performance, tbd.) is that this would no
>> longer allow processing infinite timestamps, since numeric does not
>> support infinity.  It could be argued that running extract() on infinite
>> timestamps isn't very useful, but it's something to consider explicitly.
> I wonder if it's time to fix that, ie introduce +-Infinity into numeric.c.
> This isn't the first time we've seen issues with numeric not being a
> superset of float, and it won't be the last.
> At first glance there's no free bits in the on-disk format for numeric,
> but we could do something by defining the low-order bits of the header
> word for a NaN to distinguish between real NaN and +/- infinity.
> It looks like those bits should reliably be zero right now.
+1 for adding +/- infinity to NUMERIC.
Best,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
On 2020-05-25 15:28, Peter Eisentraut wrote:
> On 2019-12-02 23:52, Thomas Munro wrote:
>>> I'm not an expert in floating point math but hopefully it means that no
>>> type change is required - double precision can handle it.
>> Me neither, but the SQL standard requires us to use an exact numeric
>> type, so it's wrong on that level by definition.
> I looked into this (changing the return types of date_part()/extract()
> from float8 to numeric).
> One problem (other than perhaps performance, tbd.) is that this would no
> longer allow processing infinite timestamps, since numeric does not
> support infinity.  It could be argued that running extract() on infinite
> timestamps isn't very useful, but it's something to consider explicitly.
Now that numeric supports infinity, here is a patch that changes the
return types of date_part() to numeric. It's not meant to be a final
version, but it is useful for discussing a few things.
The internal implementation could be made a bit more elegant if we had
variants of int4_numeric() and int8_numeric() that don't have to go
through fmgr. This would also help in other areas of the code. There
are probably also other ways in which the internals could be made more
compact; I just converted them fairly directly.
When extracting seconds or milliseconds, I made it always produce 6 or 3
decimal places respectively, even if they are zero. I don't know if we want that or
what behavior we want. That's what all the changes in the regression
tests are about. Everything else passes unchanged.
The 'julian' field is a bit of a mystery. First of all it's not
documented. The regression tests only test the rounded output, perhaps
to avoid floating point differences. When you do date_part('julian',
date), then you get a correct Julian Day. But date_part('julian',
timestamp[tz]) gives incorrect Julian Date values that are off by 12
hours. My patch doesn't change that, I just noticed when I took away
the round() call in the regression tests. Those calls now produce a
different number of decimal places.
It might make sense to make date_part(..., date) a separate C function
instead of an SQL wrapper around date_part(..., timestamp). That could
return integer and could reject nonsensical fields such as "minute".
Then we could also make a less contorted implementation of
date_part('julian', date) that matches to_char(date, 'J') and remove the
incorrect implementation of date_part('julian', timestamp).
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
v1-0001-Change-return-type-of-EXTRACT-to-numeric.patch (text/plain)
From 823563feefef8d9a658fd0d586676c3aa2e3ca74 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Tue, 4 Aug 2020 15:46:51 +0200
Subject: [PATCH v1] Change return type of EXTRACT to numeric
Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
---
doc/src/sgml/func.sgml | 10 +-
src/backend/utils/adt/date.c | 113 ++++++--
src/backend/utils/adt/timestamp.c | 339 +++++++++++++++-------
src/include/catalog/pg_proc.dat | 14 +-
src/test/regress/expected/date.out | 4 +-
src/test/regress/expected/interval.out | 24 +-
src/test/regress/expected/timestamp.out | 268 ++++++++---------
src/test/regress/expected/timestamptz.out | 272 ++++++++---------
8 files changed, 619 insertions(+), 425 deletions(-)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index f766c1bc67..f8d6ad62be 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -8626,7 +8626,7 @@ <title>Date/Time Functions</title>
<primary>date_part</primary>
</indexterm>
<function>date_part</function> ( <type>text</type>, <type>timestamp</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get timestamp subfield (equivalent to <function>extract</function>);
@@ -8641,7 +8641,7 @@ <title>Date/Time Functions</title>
<row>
<entry role="func_table_entry"><para role="func_signature">
<function>date_part</function> ( <type>text</type>, <type>interval</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get interval subfield (equivalent to <function>extract</function>);
@@ -8706,7 +8706,7 @@ <title>Date/Time Functions</title>
<primary>extract</primary>
</indexterm>
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>timestamp</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get timestamp subfield; see <xref linkend="functions-datetime-extract"/>
@@ -8720,7 +8720,7 @@ <title>Date/Time Functions</title>
<row>
<entry role="func_table_entry"><para role="func_signature">
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>interval</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get interval subfield; see <xref linkend="functions-datetime-extract"/>
@@ -9227,7 +9227,7 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
well.) <replaceable>field</replaceable> is an identifier or
string that selects what field to extract from the source value.
The <function>extract</function> function returns values of type
- <type>double precision</type>.
+ <type>numeric</type>.
The following are valid field names:
<!-- alphabetical -->
diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c
index eaaffa7137..8027930c8f 100644
--- a/src/backend/utils/adt/date.c
+++ b/src/backend/utils/adt/date.c
@@ -31,6 +31,7 @@
#include "utils/builtins.h"
#include "utils/date.h"
#include "utils/datetime.h"
+#include "utils/numeric.h"
#include "utils/sortsupport.h"
/*
@@ -1986,7 +1987,7 @@ time_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
TimeADT time = PG_GETARG_TIMEADT(1);
- float8 result;
+ Numeric result;
int type,
val;
char *lowunits;
@@ -2010,23 +2011,49 @@ time_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ /* tm->tm_sec * 1000000 + fsec */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ NULL);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_min)));
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_hour)));
break;
case DTK_TZ:
@@ -2050,7 +2077,12 @@ time_part(PG_FUNCTION_ARGS)
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = time / 1000000.0;
+ result = numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(time))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
}
else
{
@@ -2061,7 +2093,7 @@ time_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
@@ -2723,7 +2755,7 @@ timetz_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
TimeTzADT *time = PG_GETARG_TIMETZADT_P(1);
- float8 result;
+ Numeric result;
int type,
val;
char *lowunits;
@@ -2738,7 +2770,6 @@ timetz_part(PG_FUNCTION_ARGS)
if (type == UNITS)
{
- double dummy;
int tz;
fsec_t fsec;
struct pg_tm tt,
@@ -2749,38 +2780,65 @@ timetz_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_TZ:
- result = -tz;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-tz)));
break;
case DTK_TZ_MINUTE:
- result = -tz;
- result /= SECS_PER_MINUTE;
- FMODULO(result, dummy, (double) SECS_PER_MINUTE);
+ /* trunc(-tz / 60) % 60 */
+ result = numeric_mod_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-tz / SECS_PER_MINUTE))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(SECS_PER_MINUTE))),
+ NULL);
break;
case DTK_TZ_HOUR:
- dummy = -tz;
- FMODULO(dummy, result, (double) SECS_PER_HOUR);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-tz / SECS_PER_HOUR)));
break;
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ /* tm->tm_sec * 1000000 + fsec */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ NULL);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_min)));
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_hour)));
break;
case DTK_DAY:
@@ -2800,7 +2858,16 @@ timetz_part(PG_FUNCTION_ARGS)
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = time->time / 1000000.0 + time->zone;
+ /* time->time / 1000000.0 + time->zone */
+ result = numeric_add_opt_error(
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(time->time))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(time->zone))),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
}
else
{
@@ -2811,7 +2878,7 @@ timetz_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
/* timetz_zone()
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 5fe304cea7..51a2aa1819 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -35,6 +35,7 @@
#include "utils/date.h"
#include "utils/datetime.h"
#include "utils/float.h"
+#include "utils/numeric.h"
/*
* gcc's -ffast-math switch breaks routines that expect exact results from
@@ -4436,13 +4437,13 @@ date2isoyearday(int year, int mon, int mday)
*
* Used by timestamp_part and timestamptz_part when extracting from infinite
* timestamp[tz]. Returns +/-Infinity if that is the appropriate result,
- * otherwise returns zero (which should be taken as meaning to return NULL).
+ * otherwise returns NULL (which should be taken as meaning to return SQL NULL).
*
* Errors thrown here for invalid units should exactly match those that
* would be thrown in the calling functions, else there will be unexpected
* discrepancies between finite- and infinite-input cases.
*/
-static float8
+static Numeric
NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
bool isNegative, bool isTz)
{
@@ -4478,7 +4479,7 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
case DTK_TZ:
case DTK_TZ_MINUTE:
case DTK_TZ_HOUR:
- return 0.0;
+ return NULL;
/* Monotonically-increasing units */
case DTK_YEAR:
@@ -4489,9 +4490,15 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
case DTK_ISOYEAR:
case DTK_EPOCH:
if (isNegative)
- return -get_float8_infinity();
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
else
- return get_float8_infinity();
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
default:
if (isTz)
@@ -4504,7 +4511,7 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("timestamp units \"%s\" not supported",
lowunits)));
- return 0.0; /* keep compiler quiet */
+ return NULL; /* keep compiler quiet */
}
}
@@ -4516,7 +4523,7 @@ timestamp_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
Timestamp timestamp = PG_GETARG_TIMESTAMP(1);
- float8 result;
+ Numeric result;
Timestamp epoch;
int type,
val;
@@ -4539,7 +4546,7 @@ timestamp_part(PG_FUNCTION_ARGS)
TIMESTAMP_IS_NOBEGIN(timestamp),
false);
if (result)
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
else
PG_RETURN_NULL();
}
@@ -4554,47 +4561,73 @@ timestamp_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ /* tm->tm_sec * 1000000 + fsec */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ NULL);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_min)));
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_hour)));
break;
case DTK_DAY:
- result = tm->tm_mday;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_mday)));
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_mon)));
break;
case DTK_QUARTER:
- result = (tm->tm_mon - 1) / 3 + 1;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum((tm->tm_mon - 1) / 3 + 1)));
break;
case DTK_WEEK:
- result = (float8) date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday))));
break;
case DTK_YEAR:
if (tm->tm_year > 0)
- result = tm->tm_year;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year)));
else
/* there is no year 0, just 1 BC and 1 AD */
- result = tm->tm_year - 1;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year - 1)));
break;
case DTK_DECADE:
@@ -4605,9 +4638,9 @@ timestamp_part(PG_FUNCTION_ARGS)
* is 11 BC thru 2 BC...
*/
if (tm->tm_year >= 0)
- result = tm->tm_year / 10;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year / 10)));
else
- result = -((8 - (tm->tm_year - 1)) / 10);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-((8 - (tm->tm_year - 1)) / 10))));
break;
case DTK_CENTURY:
@@ -4619,44 +4652,57 @@ timestamp_part(PG_FUNCTION_ARGS)
* ----
*/
if (tm->tm_year > 0)
- result = (tm->tm_year + 99) / 100;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum((tm->tm_year + 99) / 100)));
else
/* caution: C division may have negative remainder */
- result = -((99 - (tm->tm_year - 1)) / 100);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-((99 - (tm->tm_year - 1)) / 100))));
break;
case DTK_MILLENNIUM:
/* see comments above. */
if (tm->tm_year > 0)
- result = (tm->tm_year + 999) / 1000;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum((tm->tm_year + 999) / 1000)));
else
- result = -((999 - (tm->tm_year - 1)) / 1000);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-((999 - (tm->tm_year - 1)) / 1000))));
break;
case DTK_JULIAN:
- result = date2j(tm->tm_year, tm->tm_mon, tm->tm_mday);
- result += ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
- tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY;
+ result = numeric_add_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)))),
+ numeric_div_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum((((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec)))),
+ DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(SECS_PER_DAY * 1000000LL))),
+ NULL),
+ NULL);
break;
case DTK_ISOYEAR:
- result = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
/* Adjust BC years */
- if (result <= 0)
- result -= 1;
+ if (tmp <= 0)
+ tmp -= 1;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tmp)));
break;
+ }
case DTK_DOW:
case DTK_ISODOW:
- result = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
- if (val == DTK_ISODOW && result == 0)
- result = 7;
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tmp)));
break;
+ }
case DTK_DOY:
- result = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
- - date2j(tm->tm_year, 1, 1) + 1);
+ {
+ int tmp = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tmp)));
break;
+ }
case DTK_TZ:
case DTK_TZ_MINUTE:
@@ -4675,11 +4721,16 @@ timestamp_part(PG_FUNCTION_ARGS)
{
case DTK_EPOCH:
epoch = SetEpochTimestamp();
- /* try to avoid precision loss in subtraction */
- if (timestamp < (PG_INT64_MAX + epoch))
- result = (timestamp - epoch) / 1000000.0;
- else
- result = ((float8) timestamp - epoch) / 1000000.0;
+ /* (timestamp - epoch) / 1000000 */
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(timestamp))),
+ DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(epoch))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
default:
@@ -4699,7 +4750,7 @@ timestamp_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
/* timestamptz_part()
@@ -4710,13 +4761,12 @@ timestamptz_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
TimestampTz timestamp = PG_GETARG_TIMESTAMPTZ(1);
- float8 result;
+ Numeric result;
Timestamp epoch;
int tz;
int type,
val;
char *lowunits;
- double dummy;
fsec_t fsec;
struct pg_tm tt,
*tm = &tt;
@@ -4735,7 +4785,7 @@ timestamptz_part(PG_FUNCTION_ARGS)
TIMESTAMP_IS_NOBEGIN(timestamp),
true);
if (result)
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
else
PG_RETURN_NULL();
}
@@ -4750,112 +4800,152 @@ timestamptz_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_TZ:
- result = -tz;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-tz)));
break;
case DTK_TZ_MINUTE:
- result = -tz;
- result /= MINS_PER_HOUR;
- FMODULO(result, dummy, (double) MINS_PER_HOUR);
+ /* trunc(-tz / 60) % 60 */
+ result = numeric_mod_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-tz / SECS_PER_MINUTE))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(MINS_PER_HOUR))),
+ NULL);
break;
case DTK_TZ_HOUR:
- dummy = -tz;
- FMODULO(dummy, result, (double) SECS_PER_HOUR);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-tz / SECS_PER_HOUR)));
break;
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ /* tm->tm_sec * 1000000 + fsec */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ NULL);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_min)));
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_hour)));
break;
case DTK_DAY:
- result = tm->tm_mday;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_mday)));
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_mon)));
break;
case DTK_QUARTER:
- result = (tm->tm_mon - 1) / 3 + 1;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum((tm->tm_mon - 1) / 3 + 1)));
break;
case DTK_WEEK:
- result = (float8) date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday))));
break;
case DTK_YEAR:
if (tm->tm_year > 0)
- result = tm->tm_year;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year)));
else
/* there is no year 0, just 1 BC and 1 AD */
- result = tm->tm_year - 1;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year - 1)));
break;
case DTK_DECADE:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = tm->tm_year / 10;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year / 10)));
else
- result = -((8 - (tm->tm_year - 1)) / 10);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-((8 - (tm->tm_year - 1)) / 10))));
break;
case DTK_CENTURY:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = (tm->tm_year + 99) / 100;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum((tm->tm_year + 99) / 100)));
else
- result = -((99 - (tm->tm_year - 1)) / 100);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-((99 - (tm->tm_year - 1)) / 100))));
break;
case DTK_MILLENNIUM:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = (tm->tm_year + 999) / 1000;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum((tm->tm_year + 999) / 1000)));
else
- result = -((999 - (tm->tm_year - 1)) / 1000);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(-((999 - (tm->tm_year - 1)) / 1000))));
break;
case DTK_JULIAN:
- result = date2j(tm->tm_year, tm->tm_mon, tm->tm_mday);
- result += ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
- tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY;
+ result = numeric_add_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)))),
+ numeric_div_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum((((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec)))),
+ DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(SECS_PER_DAY * 1000000LL))),
+ NULL),
+ NULL);
break;
case DTK_ISOYEAR:
- result = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
/* Adjust BC years */
- if (result <= 0)
- result -= 1;
+ if (tmp <= 0)
+ tmp -= 1;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tmp)));
break;
+ }
case DTK_DOW:
case DTK_ISODOW:
- result = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
- if (val == DTK_ISODOW && result == 0)
- result = 7;
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tmp)));
break;
+ }
case DTK_DOY:
- result = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
- - date2j(tm->tm_year, 1, 1) + 1);
+ {
+ int tmp = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tmp)));
break;
+ }
default:
ereport(ERROR,
@@ -4872,11 +4962,16 @@ timestamptz_part(PG_FUNCTION_ARGS)
{
case DTK_EPOCH:
epoch = SetEpochTimestamp();
- /* try to avoid precision loss in subtraction */
- if (timestamp < (PG_INT64_MAX + epoch))
- result = (timestamp - epoch) / 1000000.0;
- else
- result = ((float8) timestamp - epoch) / 1000000.0;
+ /* (timestamp - epoch) / 1000000 */
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(timestamp))),
+ DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(epoch))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
default:
@@ -4897,7 +4992,7 @@ timestamptz_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
@@ -4909,7 +5004,7 @@ interval_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
Interval *interval = PG_GETARG_INTERVAL_P(1);
- float8 result;
+ Numeric result;
int type,
val;
char *lowunits;
@@ -4932,54 +5027,77 @@ interval_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ /* tm->tm_sec * 1000000 + fsec */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ NULL);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(
+ numeric_mul_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_sec))),
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(fsec))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_min)));
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_hour)));
break;
case DTK_DAY:
- result = tm->tm_mday;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_mday)));
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_mon)));
break;
case DTK_QUARTER:
- result = (tm->tm_mon / 3) + 1;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum((tm->tm_mon / 3) + 1)));
break;
case DTK_YEAR:
- result = tm->tm_year;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year)));
break;
case DTK_DECADE:
- /* caution: C division may have negative remainder */
- result = tm->tm_year / 10;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year / 10)));
break;
case DTK_CENTURY:
- /* caution: C division may have negative remainder */
- result = tm->tm_year / 100;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year / 100)));
break;
case DTK_MILLENNIUM:
- /* caution: C division may have negative remainder */
- result = tm->tm_year / 1000;
+ result = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(tm->tm_year / 1000)));
break;
default:
@@ -4999,10 +5117,19 @@ interval_part(PG_FUNCTION_ARGS)
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = interval->time / 1000000.0;
- result += ((double) DAYS_PER_YEAR * SECS_PER_DAY) * (interval->month / MONTHS_PER_YEAR);
- result += ((double) DAYS_PER_MONTH * SECS_PER_DAY) * (interval->month % MONTHS_PER_YEAR);
- result += ((double) SECS_PER_DAY) * interval->day;
+ result =
+ numeric_add_opt_error(
+ numeric_div_opt_error(DatumGetNumeric(DirectFunctionCall1(int8_numeric, Int64GetDatum(interval->time))),
+ DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(1000000))),
+ NULL),
+ DatumGetNumeric(DirectFunctionCall1(int8_numeric,
+ Int64GetDatum((int64) (DAYS_PER_YEAR * SECS_PER_DAY) * (interval->month / MONTHS_PER_YEAR) +
+ ((int64) DAYS_PER_MONTH * SECS_PER_DAY) * (interval->month % MONTHS_PER_YEAR) +
+ ((int64) SECS_PER_DAY) * interval->day))),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
}
else
{
@@ -5013,7 +5140,7 @@ interval_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 082a11f270..96c9d7bc0d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -2315,10 +2315,10 @@
proname => 'interval_mi', prorettype => 'interval',
proargtypes => 'interval interval', prosrc => 'interval_mi' },
{ oid => '1171', descr => 'extract field from timestamp with time zone',
- proname => 'date_part', provolatile => 's', prorettype => 'float8',
+ proname => 'date_part', provolatile => 's', prorettype => 'numeric',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_part' },
{ oid => '1172', descr => 'extract field from interval',
- proname => 'date_part', prorettype => 'float8',
+ proname => 'date_part', prorettype => 'numeric',
proargtypes => 'text interval', prosrc => 'interval_part' },
{ oid => '1174', descr => 'convert date to timestamp with time zone',
proname => 'timestamptz', provolatile => 's', prorettype => 'timestamptz',
@@ -2465,7 +2465,7 @@
proname => 'datetime_pl', prorettype => 'timestamp',
proargtypes => 'date time', prosrc => 'datetime_timestamp' },
{ oid => '1273', descr => 'extract field from time with time zone',
- proname => 'date_part', prorettype => 'float8', proargtypes => 'text timetz',
+ proname => 'date_part', prorettype => 'numeric', proargtypes => 'text timetz',
prosrc => 'timetz_part' },
{ oid => '1274',
proname => 'int84pl', prorettype => 'int8', proargtypes => 'int8 int4',
@@ -2812,11 +2812,11 @@
prosrc => 'textlen' },
{ oid => '1384', descr => 'extract field from date',
- proname => 'date_part', prolang => 'sql', prorettype => 'float8',
+ proname => 'date_part', prolang => 'sql', prorettype => 'numeric',
proargtypes => 'text date',
- prosrc => 'select pg_catalog.date_part($1, cast($2 as timestamp without time zone))' },
+ prosrc => 'select round(pg_catalog.date_part($1, cast($2 as timestamp without time zone)), 0)' },
{ oid => '1385', descr => 'extract field from time',
- proname => 'date_part', prorettype => 'float8', proargtypes => 'text time',
+ proname => 'date_part', prorettype => 'numeric', proargtypes => 'text time',
prosrc => 'time_part' },
{ oid => '1386',
descr => 'date difference from today preserving months and years',
@@ -5721,7 +5721,7 @@
proname => 'date_trunc', prorettype => 'timestamp',
proargtypes => 'text timestamp', prosrc => 'timestamp_trunc' },
{ oid => '2021', descr => 'extract field from timestamp',
- proname => 'date_part', prorettype => 'float8',
+ proname => 'date_part', prorettype => 'numeric',
proargtypes => 'text timestamp', prosrc => 'timestamp_part' },
{ oid => '2024', descr => 'convert date to timestamp',
proname => 'timestamp', prorettype => 'timestamp', proargtypes => 'date',
diff --git a/src/test/regress/expected/date.out b/src/test/regress/expected/date.out
index 4cdf1635f2..281212aefe 100644
--- a/src/test/regress/expected/date.out
+++ b/src/test/regress/expected/date.out
@@ -932,13 +932,13 @@ SELECT EXTRACT(EPOCH FROM DATE '1970-01-01'); -- 0
SELECT EXTRACT(EPOCH FROM TIMESTAMP '1970-01-01'); -- 0
date_part
-----------
- 0
+ 0.000000
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00'); -- 0
date_part
-----------
- 0
+ 0.000000
(1 row)
--
diff --git a/src/test/regress/expected/interval.out b/src/test/regress/expected/interval.out
index fde4be5271..274a3714cf 100644
--- a/src/test/regress/expected/interval.out
+++ b/src/test/regress/expected/interval.out
@@ -948,18 +948,18 @@ SELECT f1,
EXTRACT(MILLENNIUM FROM f1) AS MILLENNIUM,
EXTRACT(EPOCH FROM f1) AS EPOCH
FROM INTERVAL_TBL;
- f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
--------------------------------+-------------+-------------+--------+--------+------+-----+-------+---------+------+--------+---------+------------+------------
- @ 1 min | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60
- @ 5 hours | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000
- @ 10 days | 0 | 0 | 0 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000
- @ 34 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072958400
- @ 3 mons | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000
- @ 14 secs ago | -14000000 | -14000 | -14 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14
- @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000 | 4 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784
- @ 6 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189345600
- @ 5 mons | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000
- @ 5 mons 12 hours | 0 | 0 | 0 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200
+ f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
+-------------------------------+-------------+-------------+------------+--------+------+-----+-------+---------+------+--------+---------+------------+-------------------
+ @ 1 min | 0 | 0.000 | 0.000000 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60.000000
+ @ 5 hours | 0 | 0.000 | 0.000000 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000.000000
+ @ 10 days | 0 | 0.000 | 0.000000 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000.000000
+ @ 34 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072958400.000000
+ @ 3 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000.000000
+ @ 14 secs ago | -14000000 | -14000.000 | -14.000000 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14.000000
+ @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000.000 | 4.000000 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784.000000
+ @ 6 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189345600.000000
+ @ 5 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000.000000
+ @ 5 mons 12 hours | 0 | 0.000 | 0.000000 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200.000000
(10 rows)
SELECT EXTRACT(FORTNIGHT FROM INTERVAL '2 days'); -- error
diff --git a/src/test/regress/expected/timestamp.out b/src/test/regress/expected/timestamp.out
index 5f97505a30..1c3f0db1d1 100644
--- a/src/test/regress/expected/timestamp.out
+++ b/src/test/regress/expected/timestamp.out
@@ -615,146 +615,146 @@ SELECT d1 as "timestamp",
date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
FROM TIMESTAMP_TBL;
- timestamp | year | month | day | hour | minute | second
------------------------------+-----------+-------+-----+------+--------+--------
- -infinity | -Infinity | | | | |
- infinity | Infinity | | | | |
- Thu Jan 01 00:00:00 1970 | 1970 | 1 | 1 | 0 | 0 | 0
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:02 1997 | 1997 | 2 | 10 | 17 | 32 | 2
- Mon Feb 10 17:32:01.4 1997 | 1997 | 2 | 10 | 17 | 32 | 1.4
- Mon Feb 10 17:32:01.5 1997 | 1997 | 2 | 10 | 17 | 32 | 1.5
- Mon Feb 10 17:32:01.6 1997 | 1997 | 2 | 10 | 17 | 32 | 1.6
- Thu Jan 02 00:00:00 1997 | 1997 | 1 | 2 | 0 | 0 | 0
- Thu Jan 02 03:04:05 1997 | 1997 | 1 | 2 | 3 | 4 | 5
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Jun 10 17:32:01 1997 | 1997 | 6 | 10 | 17 | 32 | 1
- Sat Sep 22 18:19:20 2001 | 2001 | 9 | 22 | 18 | 19 | 20
- Wed Mar 15 08:14:01 2000 | 2000 | 3 | 15 | 8 | 14 | 1
- Wed Mar 15 13:14:02 2000 | 2000 | 3 | 15 | 13 | 14 | 2
- Wed Mar 15 12:14:03 2000 | 2000 | 3 | 15 | 12 | 14 | 3
- Wed Mar 15 03:14:04 2000 | 2000 | 3 | 15 | 3 | 14 | 4
- Wed Mar 15 02:14:05 2000 | 2000 | 3 | 15 | 2 | 14 | 5
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:00 1997 | 1997 | 2 | 10 | 17 | 32 | 0
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Jun 10 18:32:01 1997 | 1997 | 6 | 10 | 18 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Feb 11 17:32:01 1997 | 1997 | 2 | 11 | 17 | 32 | 1
- Wed Feb 12 17:32:01 1997 | 1997 | 2 | 12 | 17 | 32 | 1
- Thu Feb 13 17:32:01 1997 | 1997 | 2 | 13 | 17 | 32 | 1
- Fri Feb 14 17:32:01 1997 | 1997 | 2 | 14 | 17 | 32 | 1
- Sat Feb 15 17:32:01 1997 | 1997 | 2 | 15 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 0097 BC | -97 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 0097 | 97 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 0597 | 597 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1097 | 1097 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 1697 | 1697 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 1797 | 1797 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1897 | 1897 | 2 | 16 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 2097 | 2097 | 2 | 16 | 17 | 32 | 1
- Wed Feb 28 17:32:01 1996 | 1996 | 2 | 28 | 17 | 32 | 1
- Thu Feb 29 17:32:01 1996 | 1996 | 2 | 29 | 17 | 32 | 1
- Fri Mar 01 17:32:01 1996 | 1996 | 3 | 1 | 17 | 32 | 1
- Mon Dec 30 17:32:01 1996 | 1996 | 12 | 30 | 17 | 32 | 1
- Tue Dec 31 17:32:01 1996 | 1996 | 12 | 31 | 17 | 32 | 1
- Wed Jan 01 17:32:01 1997 | 1997 | 1 | 1 | 17 | 32 | 1
- Fri Feb 28 17:32:01 1997 | 1997 | 2 | 28 | 17 | 32 | 1
- Sat Mar 01 17:32:01 1997 | 1997 | 3 | 1 | 17 | 32 | 1
- Tue Dec 30 17:32:01 1997 | 1997 | 12 | 30 | 17 | 32 | 1
- Wed Dec 31 17:32:01 1997 | 1997 | 12 | 31 | 17 | 32 | 1
- Fri Dec 31 17:32:01 1999 | 1999 | 12 | 31 | 17 | 32 | 1
- Sat Jan 01 17:32:01 2000 | 2000 | 1 | 1 | 17 | 32 | 1
- Sun Dec 31 17:32:01 2000 | 2000 | 12 | 31 | 17 | 32 | 1
- Mon Jan 01 17:32:01 2001 | 2001 | 1 | 1 | 17 | 32 | 1
+ timestamp | year | month | day | hour | minute | second
+-----------------------------+-----------+-------+-----+------+--------+-----------
+ -infinity | -Infinity | | | | |
+ infinity | Infinity | | | | |
+ Thu Jan 01 00:00:00 1970 | 1970 | 1 | 1 | 0 | 0 | 0.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:02 1997 | 1997 | 2 | 10 | 17 | 32 | 2.000000
+ Mon Feb 10 17:32:01.4 1997 | 1997 | 2 | 10 | 17 | 32 | 1.400000
+ Mon Feb 10 17:32:01.5 1997 | 1997 | 2 | 10 | 17 | 32 | 1.500000
+ Mon Feb 10 17:32:01.6 1997 | 1997 | 2 | 10 | 17 | 32 | 1.600000
+ Thu Jan 02 00:00:00 1997 | 1997 | 1 | 2 | 0 | 0 | 0.000000
+ Thu Jan 02 03:04:05 1997 | 1997 | 1 | 2 | 3 | 4 | 5.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Jun 10 17:32:01 1997 | 1997 | 6 | 10 | 17 | 32 | 1.000000
+ Sat Sep 22 18:19:20 2001 | 2001 | 9 | 22 | 18 | 19 | 20.000000
+ Wed Mar 15 08:14:01 2000 | 2000 | 3 | 15 | 8 | 14 | 1.000000
+ Wed Mar 15 13:14:02 2000 | 2000 | 3 | 15 | 13 | 14 | 2.000000
+ Wed Mar 15 12:14:03 2000 | 2000 | 3 | 15 | 12 | 14 | 3.000000
+ Wed Mar 15 03:14:04 2000 | 2000 | 3 | 15 | 3 | 14 | 4.000000
+ Wed Mar 15 02:14:05 2000 | 2000 | 3 | 15 | 2 | 14 | 5.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:00 1997 | 1997 | 2 | 10 | 17 | 32 | 0.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Jun 10 18:32:01 1997 | 1997 | 6 | 10 | 18 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Feb 11 17:32:01 1997 | 1997 | 2 | 11 | 17 | 32 | 1.000000
+ Wed Feb 12 17:32:01 1997 | 1997 | 2 | 12 | 17 | 32 | 1.000000
+ Thu Feb 13 17:32:01 1997 | 1997 | 2 | 13 | 17 | 32 | 1.000000
+ Fri Feb 14 17:32:01 1997 | 1997 | 2 | 14 | 17 | 32 | 1.000000
+ Sat Feb 15 17:32:01 1997 | 1997 | 2 | 15 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 0097 BC | -97 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 0097 | 97 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 0597 | 597 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1097 | 1097 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 1697 | 1697 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 1797 | 1797 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1897 | 1897 | 2 | 16 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 2097 | 2097 | 2 | 16 | 17 | 32 | 1.000000
+ Wed Feb 28 17:32:01 1996 | 1996 | 2 | 28 | 17 | 32 | 1.000000
+ Thu Feb 29 17:32:01 1996 | 1996 | 2 | 29 | 17 | 32 | 1.000000
+ Fri Mar 01 17:32:01 1996 | 1996 | 3 | 1 | 17 | 32 | 1.000000
+ Mon Dec 30 17:32:01 1996 | 1996 | 12 | 30 | 17 | 32 | 1.000000
+ Tue Dec 31 17:32:01 1996 | 1996 | 12 | 31 | 17 | 32 | 1.000000
+ Wed Jan 01 17:32:01 1997 | 1997 | 1 | 1 | 17 | 32 | 1.000000
+ Fri Feb 28 17:32:01 1997 | 1997 | 2 | 28 | 17 | 32 | 1.000000
+ Sat Mar 01 17:32:01 1997 | 1997 | 3 | 1 | 17 | 32 | 1.000000
+ Tue Dec 30 17:32:01 1997 | 1997 | 12 | 30 | 17 | 32 | 1.000000
+ Wed Dec 31 17:32:01 1997 | 1997 | 12 | 31 | 17 | 32 | 1.000000
+ Fri Dec 31 17:32:01 1999 | 1999 | 12 | 31 | 17 | 32 | 1.000000
+ Sat Jan 01 17:32:01 2000 | 2000 | 1 | 1 | 17 | 32 | 1.000000
+ Sun Dec 31 17:32:01 2000 | 2000 | 12 | 31 | 17 | 32 | 1.000000
+ Mon Jan 01 17:32:01 2001 | 2001 | 1 | 1 | 17 | 32 | 1.000000
(65 rows)
SELECT d1 as "timestamp",
date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
date_part( 'usec', d1) AS usec
FROM TIMESTAMP_TBL;
- timestamp | quarter | msec | usec
------------------------------+---------+-------+----------
- -infinity | | |
- infinity | | |
- Thu Jan 01 00:00:00 1970 | 1 | 0 | 0
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:02 1997 | 1 | 2000 | 2000000
- Mon Feb 10 17:32:01.4 1997 | 1 | 1400 | 1400000
- Mon Feb 10 17:32:01.5 1997 | 1 | 1500 | 1500000
- Mon Feb 10 17:32:01.6 1997 | 1 | 1600 | 1600000
- Thu Jan 02 00:00:00 1997 | 1 | 0 | 0
- Thu Jan 02 03:04:05 1997 | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Jun 10 17:32:01 1997 | 2 | 1000 | 1000000
- Sat Sep 22 18:19:20 2001 | 3 | 20000 | 20000000
- Wed Mar 15 08:14:01 2000 | 1 | 1000 | 1000000
- Wed Mar 15 13:14:02 2000 | 1 | 2000 | 2000000
- Wed Mar 15 12:14:03 2000 | 1 | 3000 | 3000000
- Wed Mar 15 03:14:04 2000 | 1 | 4000 | 4000000
- Wed Mar 15 02:14:05 2000 | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:00 1997 | 1 | 0 | 0
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Jun 10 18:32:01 1997 | 2 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Feb 11 17:32:01 1997 | 1 | 1000 | 1000000
- Wed Feb 12 17:32:01 1997 | 1 | 1000 | 1000000
- Thu Feb 13 17:32:01 1997 | 1 | 1000 | 1000000
- Fri Feb 14 17:32:01 1997 | 1 | 1000 | 1000000
- Sat Feb 15 17:32:01 1997 | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 0097 BC | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 0097 | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 0597 | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1097 | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 1697 | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 1797 | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1897 | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 2097 | 1 | 1000 | 1000000
- Wed Feb 28 17:32:01 1996 | 1 | 1000 | 1000000
- Thu Feb 29 17:32:01 1996 | 1 | 1000 | 1000000
- Fri Mar 01 17:32:01 1996 | 1 | 1000 | 1000000
- Mon Dec 30 17:32:01 1996 | 4 | 1000 | 1000000
- Tue Dec 31 17:32:01 1996 | 4 | 1000 | 1000000
- Wed Jan 01 17:32:01 1997 | 1 | 1000 | 1000000
- Fri Feb 28 17:32:01 1997 | 1 | 1000 | 1000000
- Sat Mar 01 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Dec 30 17:32:01 1997 | 4 | 1000 | 1000000
- Wed Dec 31 17:32:01 1997 | 4 | 1000 | 1000000
- Fri Dec 31 17:32:01 1999 | 4 | 1000 | 1000000
- Sat Jan 01 17:32:01 2000 | 1 | 1000 | 1000000
- Sun Dec 31 17:32:01 2000 | 4 | 1000 | 1000000
- Mon Jan 01 17:32:01 2001 | 1 | 1000 | 1000000
+ timestamp | quarter | msec | usec
+-----------------------------+---------+-----------+----------
+ -infinity | | |
+ infinity | | |
+ Thu Jan 01 00:00:00 1970 | 1 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:02 1997 | 1 | 2000.000 | 2000000
+ Mon Feb 10 17:32:01.4 1997 | 1 | 1400.000 | 1400000
+ Mon Feb 10 17:32:01.5 1997 | 1 | 1500.000 | 1500000
+ Mon Feb 10 17:32:01.6 1997 | 1 | 1600.000 | 1600000
+ Thu Jan 02 00:00:00 1997 | 1 | 0.000 | 0
+ Thu Jan 02 03:04:05 1997 | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Jun 10 17:32:01 1997 | 2 | 1000.000 | 1000000
+ Sat Sep 22 18:19:20 2001 | 3 | 20000.000 | 20000000
+ Wed Mar 15 08:14:01 2000 | 1 | 1000.000 | 1000000
+ Wed Mar 15 13:14:02 2000 | 1 | 2000.000 | 2000000
+ Wed Mar 15 12:14:03 2000 | 1 | 3000.000 | 3000000
+ Wed Mar 15 03:14:04 2000 | 1 | 4000.000 | 4000000
+ Wed Mar 15 02:14:05 2000 | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:00 1997 | 1 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Jun 10 18:32:01 1997 | 2 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Feb 11 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Wed Feb 12 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Thu Feb 13 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Fri Feb 14 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sat Feb 15 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 0097 BC | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 0097 | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 0597 | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1097 | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 1697 | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 1797 | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1897 | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 2097 | 1 | 1000.000 | 1000000
+ Wed Feb 28 17:32:01 1996 | 1 | 1000.000 | 1000000
+ Thu Feb 29 17:32:01 1996 | 1 | 1000.000 | 1000000
+ Fri Mar 01 17:32:01 1996 | 1 | 1000.000 | 1000000
+ Mon Dec 30 17:32:01 1996 | 4 | 1000.000 | 1000000
+ Tue Dec 31 17:32:01 1996 | 4 | 1000.000 | 1000000
+ Wed Jan 01 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Fri Feb 28 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sat Mar 01 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Dec 30 17:32:01 1997 | 4 | 1000.000 | 1000000
+ Wed Dec 31 17:32:01 1997 | 4 | 1000.000 | 1000000
+ Fri Dec 31 17:32:01 1999 | 4 | 1000.000 | 1000000
+ Sat Jan 01 17:32:01 2000 | 1 | 1000.000 | 1000000
+ Sun Dec 31 17:32:01 2000 | 4 | 1000.000 | 1000000
+ Mon Jan 01 17:32:01 2001 | 1 | 1000.000 | 1000000
(65 rows)
SELECT d1 as "timestamp",
diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out
index 639b50308e..2420d433de 100644
--- a/src/test/regress/expected/timestamptz.out
+++ b/src/test/regress/expected/timestamptz.out
@@ -733,148 +733,148 @@ SELECT d1 as timestamptz,
date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
FROM TIMESTAMPTZ_TBL;
- timestamptz | year | month | day | hour | minute | second
----------------------------------+-----------+-------+-----+------+--------+--------
- -infinity | -Infinity | | | | |
- infinity | Infinity | | | | |
- Wed Dec 31 16:00:00 1969 PST | 1969 | 12 | 31 | 16 | 0 | 0
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:02 1997 PST | 1997 | 2 | 10 | 17 | 32 | 2
- Mon Feb 10 17:32:01.4 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.4
- Mon Feb 10 17:32:01.5 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.5
- Mon Feb 10 17:32:01.6 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.6
- Thu Jan 02 00:00:00 1997 PST | 1997 | 1 | 2 | 0 | 0 | 0
- Thu Jan 02 03:04:05 1997 PST | 1997 | 1 | 2 | 3 | 4 | 5
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Jun 10 17:32:01 1997 PDT | 1997 | 6 | 10 | 17 | 32 | 1
- Sat Sep 22 18:19:20 2001 PDT | 2001 | 9 | 22 | 18 | 19 | 20
- Wed Mar 15 08:14:01 2000 PST | 2000 | 3 | 15 | 8 | 14 | 1
- Wed Mar 15 04:14:02 2000 PST | 2000 | 3 | 15 | 4 | 14 | 2
- Wed Mar 15 02:14:03 2000 PST | 2000 | 3 | 15 | 2 | 14 | 3
- Wed Mar 15 03:14:04 2000 PST | 2000 | 3 | 15 | 3 | 14 | 4
- Wed Mar 15 01:14:05 2000 PST | 2000 | 3 | 15 | 1 | 14 | 5
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:00 1997 PST | 1997 | 2 | 10 | 17 | 32 | 0
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1
- Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1
- Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1
- Mon Feb 10 14:32:01 1997 PST | 1997 | 2 | 10 | 14 | 32 | 1
- Thu Jul 10 14:32:01 1997 PDT | 1997 | 7 | 10 | 14 | 32 | 1
- Tue Jun 10 18:32:01 1997 PDT | 1997 | 6 | 10 | 18 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Feb 11 17:32:01 1997 PST | 1997 | 2 | 11 | 17 | 32 | 1
- Wed Feb 12 17:32:01 1997 PST | 1997 | 2 | 12 | 17 | 32 | 1
- Thu Feb 13 17:32:01 1997 PST | 1997 | 2 | 13 | 17 | 32 | 1
- Fri Feb 14 17:32:01 1997 PST | 1997 | 2 | 14 | 17 | 32 | 1
- Sat Feb 15 17:32:01 1997 PST | 1997 | 2 | 15 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 0097 PST BC | -97 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 0097 PST | 97 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 0597 PST | 597 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1097 PST | 1097 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 1697 PST | 1697 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 1797 PST | 1797 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1897 PST | 1897 | 2 | 16 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 2097 PST | 2097 | 2 | 16 | 17 | 32 | 1
- Wed Feb 28 17:32:01 1996 PST | 1996 | 2 | 28 | 17 | 32 | 1
- Thu Feb 29 17:32:01 1996 PST | 1996 | 2 | 29 | 17 | 32 | 1
- Fri Mar 01 17:32:01 1996 PST | 1996 | 3 | 1 | 17 | 32 | 1
- Mon Dec 30 17:32:01 1996 PST | 1996 | 12 | 30 | 17 | 32 | 1
- Tue Dec 31 17:32:01 1996 PST | 1996 | 12 | 31 | 17 | 32 | 1
- Wed Jan 01 17:32:01 1997 PST | 1997 | 1 | 1 | 17 | 32 | 1
- Fri Feb 28 17:32:01 1997 PST | 1997 | 2 | 28 | 17 | 32 | 1
- Sat Mar 01 17:32:01 1997 PST | 1997 | 3 | 1 | 17 | 32 | 1
- Tue Dec 30 17:32:01 1997 PST | 1997 | 12 | 30 | 17 | 32 | 1
- Wed Dec 31 17:32:01 1997 PST | 1997 | 12 | 31 | 17 | 32 | 1
- Fri Dec 31 17:32:01 1999 PST | 1999 | 12 | 31 | 17 | 32 | 1
- Sat Jan 01 17:32:01 2000 PST | 2000 | 1 | 1 | 17 | 32 | 1
- Sun Dec 31 17:32:01 2000 PST | 2000 | 12 | 31 | 17 | 32 | 1
- Mon Jan 01 17:32:01 2001 PST | 2001 | 1 | 1 | 17 | 32 | 1
+ timestamptz | year | month | day | hour | minute | second
+---------------------------------+-----------+-------+-----+------+--------+-----------
+ -infinity | -Infinity | | | | |
+ infinity | Infinity | | | | |
+ Wed Dec 31 16:00:00 1969 PST | 1969 | 12 | 31 | 16 | 0 | 0.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:02 1997 PST | 1997 | 2 | 10 | 17 | 32 | 2.000000
+ Mon Feb 10 17:32:01.4 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.400000
+ Mon Feb 10 17:32:01.5 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.500000
+ Mon Feb 10 17:32:01.6 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.600000
+ Thu Jan 02 00:00:00 1997 PST | 1997 | 1 | 2 | 0 | 0 | 0.000000
+ Thu Jan 02 03:04:05 1997 PST | 1997 | 1 | 2 | 3 | 4 | 5.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Jun 10 17:32:01 1997 PDT | 1997 | 6 | 10 | 17 | 32 | 1.000000
+ Sat Sep 22 18:19:20 2001 PDT | 2001 | 9 | 22 | 18 | 19 | 20.000000
+ Wed Mar 15 08:14:01 2000 PST | 2000 | 3 | 15 | 8 | 14 | 1.000000
+ Wed Mar 15 04:14:02 2000 PST | 2000 | 3 | 15 | 4 | 14 | 2.000000
+ Wed Mar 15 02:14:03 2000 PST | 2000 | 3 | 15 | 2 | 14 | 3.000000
+ Wed Mar 15 03:14:04 2000 PST | 2000 | 3 | 15 | 3 | 14 | 4.000000
+ Wed Mar 15 01:14:05 2000 PST | 2000 | 3 | 15 | 1 | 14 | 5.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:00 1997 PST | 1997 | 2 | 10 | 17 | 32 | 0.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1.000000
+ Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1.000000
+ Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1.000000
+ Mon Feb 10 14:32:01 1997 PST | 1997 | 2 | 10 | 14 | 32 | 1.000000
+ Thu Jul 10 14:32:01 1997 PDT | 1997 | 7 | 10 | 14 | 32 | 1.000000
+ Tue Jun 10 18:32:01 1997 PDT | 1997 | 6 | 10 | 18 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Feb 11 17:32:01 1997 PST | 1997 | 2 | 11 | 17 | 32 | 1.000000
+ Wed Feb 12 17:32:01 1997 PST | 1997 | 2 | 12 | 17 | 32 | 1.000000
+ Thu Feb 13 17:32:01 1997 PST | 1997 | 2 | 13 | 17 | 32 | 1.000000
+ Fri Feb 14 17:32:01 1997 PST | 1997 | 2 | 14 | 17 | 32 | 1.000000
+ Sat Feb 15 17:32:01 1997 PST | 1997 | 2 | 15 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 0097 PST BC | -97 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 0097 PST | 97 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 0597 PST | 597 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1097 PST | 1097 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 1697 PST | 1697 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 1797 PST | 1797 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1897 PST | 1897 | 2 | 16 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 2097 PST | 2097 | 2 | 16 | 17 | 32 | 1.000000
+ Wed Feb 28 17:32:01 1996 PST | 1996 | 2 | 28 | 17 | 32 | 1.000000
+ Thu Feb 29 17:32:01 1996 PST | 1996 | 2 | 29 | 17 | 32 | 1.000000
+ Fri Mar 01 17:32:01 1996 PST | 1996 | 3 | 1 | 17 | 32 | 1.000000
+ Mon Dec 30 17:32:01 1996 PST | 1996 | 12 | 30 | 17 | 32 | 1.000000
+ Tue Dec 31 17:32:01 1996 PST | 1996 | 12 | 31 | 17 | 32 | 1.000000
+ Wed Jan 01 17:32:01 1997 PST | 1997 | 1 | 1 | 17 | 32 | 1.000000
+ Fri Feb 28 17:32:01 1997 PST | 1997 | 2 | 28 | 17 | 32 | 1.000000
+ Sat Mar 01 17:32:01 1997 PST | 1997 | 3 | 1 | 17 | 32 | 1.000000
+ Tue Dec 30 17:32:01 1997 PST | 1997 | 12 | 30 | 17 | 32 | 1.000000
+ Wed Dec 31 17:32:01 1997 PST | 1997 | 12 | 31 | 17 | 32 | 1.000000
+ Fri Dec 31 17:32:01 1999 PST | 1999 | 12 | 31 | 17 | 32 | 1.000000
+ Sat Jan 01 17:32:01 2000 PST | 2000 | 1 | 1 | 17 | 32 | 1.000000
+ Sun Dec 31 17:32:01 2000 PST | 2000 | 12 | 31 | 17 | 32 | 1.000000
+ Mon Jan 01 17:32:01 2001 PST | 2001 | 1 | 1 | 17 | 32 | 1.000000
(66 rows)
SELECT d1 as timestamptz,
date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
date_part( 'usec', d1) AS usec
FROM TIMESTAMPTZ_TBL;
- timestamptz | quarter | msec | usec
----------------------------------+---------+-------+----------
- -infinity | | |
- infinity | | |
- Wed Dec 31 16:00:00 1969 PST | 4 | 0 | 0
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:02 1997 PST | 1 | 2000 | 2000000
- Mon Feb 10 17:32:01.4 1997 PST | 1 | 1400 | 1400000
- Mon Feb 10 17:32:01.5 1997 PST | 1 | 1500 | 1500000
- Mon Feb 10 17:32:01.6 1997 PST | 1 | 1600 | 1600000
- Thu Jan 02 00:00:00 1997 PST | 1 | 0 | 0
- Thu Jan 02 03:04:05 1997 PST | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Jun 10 17:32:01 1997 PDT | 2 | 1000 | 1000000
- Sat Sep 22 18:19:20 2001 PDT | 3 | 20000 | 20000000
- Wed Mar 15 08:14:01 2000 PST | 1 | 1000 | 1000000
- Wed Mar 15 04:14:02 2000 PST | 1 | 2000 | 2000000
- Wed Mar 15 02:14:03 2000 PST | 1 | 3000 | 3000000
- Wed Mar 15 03:14:04 2000 PST | 1 | 4000 | 4000000
- Wed Mar 15 01:14:05 2000 PST | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:00 1997 PST | 1 | 0 | 0
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 09:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 09:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 09:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 14:32:01 1997 PST | 1 | 1000 | 1000000
- Thu Jul 10 14:32:01 1997 PDT | 3 | 1000 | 1000000
- Tue Jun 10 18:32:01 1997 PDT | 2 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Feb 11 17:32:01 1997 PST | 1 | 1000 | 1000000
- Wed Feb 12 17:32:01 1997 PST | 1 | 1000 | 1000000
- Thu Feb 13 17:32:01 1997 PST | 1 | 1000 | 1000000
- Fri Feb 14 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sat Feb 15 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 0097 PST BC | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 0097 PST | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 0597 PST | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1097 PST | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 1697 PST | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 1797 PST | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1897 PST | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 2097 PST | 1 | 1000 | 1000000
- Wed Feb 28 17:32:01 1996 PST | 1 | 1000 | 1000000
- Thu Feb 29 17:32:01 1996 PST | 1 | 1000 | 1000000
- Fri Mar 01 17:32:01 1996 PST | 1 | 1000 | 1000000
- Mon Dec 30 17:32:01 1996 PST | 4 | 1000 | 1000000
- Tue Dec 31 17:32:01 1996 PST | 4 | 1000 | 1000000
- Wed Jan 01 17:32:01 1997 PST | 1 | 1000 | 1000000
- Fri Feb 28 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sat Mar 01 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Dec 30 17:32:01 1997 PST | 4 | 1000 | 1000000
- Wed Dec 31 17:32:01 1997 PST | 4 | 1000 | 1000000
- Fri Dec 31 17:32:01 1999 PST | 4 | 1000 | 1000000
- Sat Jan 01 17:32:01 2000 PST | 1 | 1000 | 1000000
- Sun Dec 31 17:32:01 2000 PST | 4 | 1000 | 1000000
- Mon Jan 01 17:32:01 2001 PST | 1 | 1000 | 1000000
+ timestamptz | quarter | msec | usec
+---------------------------------+---------+-----------+----------
+ -infinity | | |
+ infinity | | |
+ Wed Dec 31 16:00:00 1969 PST | 4 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:02 1997 PST | 1 | 2000.000 | 2000000
+ Mon Feb 10 17:32:01.4 1997 PST | 1 | 1400.000 | 1400000
+ Mon Feb 10 17:32:01.5 1997 PST | 1 | 1500.000 | 1500000
+ Mon Feb 10 17:32:01.6 1997 PST | 1 | 1600.000 | 1600000
+ Thu Jan 02 00:00:00 1997 PST | 1 | 0.000 | 0
+ Thu Jan 02 03:04:05 1997 PST | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Jun 10 17:32:01 1997 PDT | 2 | 1000.000 | 1000000
+ Sat Sep 22 18:19:20 2001 PDT | 3 | 20000.000 | 20000000
+ Wed Mar 15 08:14:01 2000 PST | 1 | 1000.000 | 1000000
+ Wed Mar 15 04:14:02 2000 PST | 1 | 2000.000 | 2000000
+ Wed Mar 15 02:14:03 2000 PST | 1 | 3000.000 | 3000000
+ Wed Mar 15 03:14:04 2000 PST | 1 | 4000.000 | 4000000
+ Wed Mar 15 01:14:05 2000 PST | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:00 1997 PST | 1 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 09:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 09:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 09:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 14:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Thu Jul 10 14:32:01 1997 PDT | 3 | 1000.000 | 1000000
+ Tue Jun 10 18:32:01 1997 PDT | 2 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Feb 11 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Wed Feb 12 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Thu Feb 13 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Fri Feb 14 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sat Feb 15 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 0097 PST BC | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 0097 PST | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 0597 PST | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1097 PST | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 1697 PST | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 1797 PST | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1897 PST | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 2097 PST | 1 | 1000.000 | 1000000
+ Wed Feb 28 17:32:01 1996 PST | 1 | 1000.000 | 1000000
+ Thu Feb 29 17:32:01 1996 PST | 1 | 1000.000 | 1000000
+ Fri Mar 01 17:32:01 1996 PST | 1 | 1000.000 | 1000000
+ Mon Dec 30 17:32:01 1996 PST | 4 | 1000.000 | 1000000
+ Tue Dec 31 17:32:01 1996 PST | 4 | 1000.000 | 1000000
+ Wed Jan 01 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Fri Feb 28 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sat Mar 01 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Dec 30 17:32:01 1997 PST | 4 | 1000.000 | 1000000
+ Wed Dec 31 17:32:01 1997 PST | 4 | 1000.000 | 1000000
+ Fri Dec 31 17:32:01 1999 PST | 4 | 1000.000 | 1000000
+ Sat Jan 01 17:32:01 2000 PST | 1 | 1000.000 | 1000000
+ Sun Dec 31 17:32:01 2000 PST | 4 | 1000.000 | 1000000
+ Mon Jan 01 17:32:01 2001 PST | 1 | 1000.000 | 1000000
(66 rows)
SELECT d1 as timestamptz,
--
2.28.0
On Tue, Aug 4, 2020 at 4:08 PM Peter Eisentraut <
peter.eisentraut@2ndquadrant.com> wrote:
On 2020-05-25 15:28, Peter Eisentraut wrote:
On 2019-12-02 23:52, Thomas Munro wrote:
I'm not an expert in floating point math but hopefully it means that no
type change is required - double precision can handle it.

Me neither, but the SQL standard requires us to use an exact numeric
type, so it's wrong on that level by definition.

I looked into this (changing the return types of date_part()/extract()
from float8 to numeric).

One problem (other than perhaps performance, tbd.) is that this would no
longer allow processing infinite timestamps, since numeric does not
support infinity.  It could be argued that running extract() on infinite
timestamps isn't very useful, but it's something to consider explicitly.

Now that numeric supports infinity, here is a patch that changes the
return types of date_part() to numeric.  It's not meant to be a final
version, but it is useful for discussing a few things.

The internal implementation could be made a bit more elegant if we had
variants of int4_numeric() and int8_numeric() that don't have to go
through fmgr.  This would also help in other areas of the code.  There
are probably also other ways in which the internals could be made more
compact; I just converted them fairly directly.

When extracting seconds or microseconds, I made it always produce 6 or 3
decimal places, even if they are zero.  I don't know if we want that or
what behavior we want.  That's what all the changes in the regression
tests are about.  Everything else passes unchanged.

The 'julian' field is a bit of a mystery.  First of all, it's not
documented.  The regression tests only test the rounded output, perhaps
to avoid floating-point differences.  When you do date_part('julian',
date), then you get a correct Julian Day.  But date_part('julian',
timestamp[tz]) gives incorrect Julian Date values that are off by 12
hours.  My patch doesn't change that; I just noticed it when I took away
the round() call in the regression tests.  Those calls now produce a
different number of decimal places.

It might make sense to make date_part(..., date) a separate C function
instead of an SQL wrapper around date_part(..., timestamp).  That could
return integer and could reject nonsensical fields such as "minute".
Then we could also make a less contorted implementation of
date_part('julian', date) that matches to_char(date, 'J') and remove the
incorrect implementation of date_part('julian', timestamp).
I like the idea of having a date variant of date_part.
Pavel
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I have also created a new date_part(..., date) in C, and added more test
coverage for that.
Other than some of the semantic issues mentioned in the previous
message, this version looks pretty good to me in principle.
I have done some performance tests to assess the impact of changing from
float to numeric. I did tests like this:
create table t1 (a int, b timestamp with time zone);
insert into t1 select generate_series(1, 10000000), current_timestamp +
random() * interval '1000 days';
select extract(dow from b) from t1 \g /dev/null
select extract(epoch from b) from t1 \g /dev/null
There appears to be about a 20% increase in run time for these tests.
These are obviously extreme tests, so I think that would be okay. More
tests and testing ideas are welcome.
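For context on the float8-vs-numeric trade-off being benchmarked here, a small sketch (my own, not from the thread) checking Tom Lane's earlier estimate: a microsecond count is an integer, and IEEE float8 represents all integers up to 2^53 exactly, which covers roughly 285 years either side of the 1970 epoch before adjacent microsecond values start to collapse:

```python
# All integers up to 2**53 are exactly representable in IEEE double precision.
limit_us = 2 ** 53

# Convert that many microseconds to years (using an average 365.25-day year).
years = limit_us / (1e6 * 86400 * 365.25)
assert 285 < years < 286            # ~285.4 years from the epoch, as estimated

# Inside the range, consecutive microsecond counts stay distinct...
assert float(10 ** 15) != float(10 ** 15 + 1)
# ...but at the boundary they collapse: 2**53 + 1 rounds to 2**53.
assert float(2 ** 53) == float(2 ** 53 + 1)
```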
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
v2-0001-Add-more-tests-for-extract-of-date-type.patch (text/plain)
From 2c954becf6224976c8983f92b4c58a7462a27ddf Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Wed, 12 Aug 2020 08:46:21 +0200
Subject: [PATCH v2 1/3] Add more tests for extract() of date type
---
src/test/regress/expected/date.out | 192 ++++++++++++++++++++--
src/test/regress/expected/expressions.out | 4 +-
src/test/regress/sql/date.sql | 49 +++++-
3 files changed, 228 insertions(+), 17 deletions(-)
diff --git a/src/test/regress/expected/date.out b/src/test/regress/expected/date.out
index 4cdf1635f2..d035fe1f1e 100644
--- a/src/test/regress/expected/date.out
+++ b/src/test/regress/expected/date.out
@@ -21,9 +21,10 @@ INSERT INTO DATE_TBL VALUES ('2000-04-03');
INSERT INTO DATE_TBL VALUES ('2038-04-08');
INSERT INTO DATE_TBL VALUES ('2039-04-09');
INSERT INTO DATE_TBL VALUES ('2040-04-10');
-SELECT f1 AS "Fifteen" FROM DATE_TBL;
- Fifteen
-------------
+INSERT INTO DATE_TBL VALUES ('2040-04-10 BC');
+SELECT f1 FROM DATE_TBL;
+ f1
+---------------
04-09-1957
06-13-1957
02-28-1996
@@ -39,11 +40,12 @@ SELECT f1 AS "Fifteen" FROM DATE_TBL;
04-08-2038
04-09-2039
04-10-2040
-(15 rows)
+ 04-10-2040 BC
+(16 rows)
-SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
- Nine
-------------
+SELECT f1 FROM DATE_TBL WHERE f1 < '2000-01-01';
+ f1
+---------------
04-09-1957
06-13-1957
02-28-1996
@@ -53,11 +55,12 @@ SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
02-28-1997
03-01-1997
03-02-1997
-(9 rows)
+ 04-10-2040 BC
+(10 rows)
-SELECT f1 AS "Three" FROM DATE_TBL
+SELECT f1 FROM DATE_TBL
WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
- Three
+ f1
------------
04-01-2000
04-02-2000
@@ -860,7 +863,8 @@ SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
13977
14343
14710
-(15 rows)
+ -1475115
+(16 rows)
SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
Days From Epoch
@@ -880,7 +884,8 @@ SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
24934
25300
25667
-(15 rows)
+ -1464158
+(16 rows)
SELECT date 'yesterday' - date 'today' AS "One day";
One day
@@ -920,6 +925,43 @@ SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
--
-- test extract!
+--
+SELECT f1 as "date",
+ date_part('year', f1) AS year,
+ date_part('month', f1) AS month,
+ date_part('day', f1) AS day,
+ date_part('quarter', f1) AS quarter,
+ date_part('decade', f1) AS decade,
+ date_part('century', f1) AS century,
+ date_part('millennium', f1) AS millennium,
+ date_part('isoyear', f1) AS isoyear,
+ date_part('week', f1) AS week,
+ date_part('dow', f1) AS dow,
+ date_part('isodow', f1) AS isodow,
+ date_part('doy', f1) AS doy,
+ date_part('julian', f1) AS julian,
+ date_part('epoch', f1) AS epoch
+ FROM date_tbl;
+ date | year | month | day | quarter | decade | century | millennium | isoyear | week | dow | isodow | doy | julian | epoch
+---------------+-------+-------+-----+---------+--------+---------+------------+---------+------+-----+--------+-----+---------+---------------
+ 04-09-1957 | 1957 | 4 | 9 | 2 | 195 | 20 | 2 | 1957 | 15 | 2 | 2 | 99 | 2435938 | -401760000
+ 06-13-1957 | 1957 | 6 | 13 | 2 | 195 | 20 | 2 | 1957 | 24 | 4 | 4 | 164 | 2436003 | -396144000
+ 02-28-1996 | 1996 | 2 | 28 | 1 | 199 | 20 | 2 | 1996 | 9 | 3 | 3 | 59 | 2450142 | 825465600
+ 02-29-1996 | 1996 | 2 | 29 | 1 | 199 | 20 | 2 | 1996 | 9 | 4 | 4 | 60 | 2450143 | 825552000
+ 03-01-1996 | 1996 | 3 | 1 | 1 | 199 | 20 | 2 | 1996 | 9 | 5 | 5 | 61 | 2450144 | 825638400
+ 03-02-1996 | 1996 | 3 | 2 | 1 | 199 | 20 | 2 | 1996 | 9 | 6 | 6 | 62 | 2450145 | 825724800
+ 02-28-1997 | 1997 | 2 | 28 | 1 | 199 | 20 | 2 | 1997 | 9 | 5 | 5 | 59 | 2450508 | 857088000
+ 03-01-1997 | 1997 | 3 | 1 | 1 | 199 | 20 | 2 | 1997 | 9 | 6 | 6 | 60 | 2450509 | 857174400
+ 03-02-1997 | 1997 | 3 | 2 | 1 | 199 | 20 | 2 | 1997 | 9 | 0 | 7 | 61 | 2450510 | 857260800
+ 04-01-2000 | 2000 | 4 | 1 | 2 | 200 | 20 | 2 | 2000 | 13 | 6 | 6 | 92 | 2451636 | 954547200
+ 04-02-2000 | 2000 | 4 | 2 | 2 | 200 | 20 | 2 | 2000 | 13 | 0 | 7 | 93 | 2451637 | 954633600
+ 04-03-2000 | 2000 | 4 | 3 | 2 | 200 | 20 | 2 | 2000 | 14 | 1 | 1 | 94 | 2451638 | 954720000
+ 04-08-2038 | 2038 | 4 | 8 | 2 | 203 | 21 | 3 | 2038 | 14 | 4 | 4 | 98 | 2465522 | 2154297600
+ 04-09-2039 | 2039 | 4 | 9 | 2 | 203 | 21 | 3 | 2039 | 14 | 6 | 6 | 99 | 2465888 | 2185920000
+ 04-10-2040 | 2040 | 4 | 10 | 2 | 204 | 21 | 3 | 2040 | 15 | 2 | 2 | 101 | 2466255 | 2217628800
+ 04-10-2040 BC | -2040 | 4 | 10 | 2 | -204 | -21 | -3 | -2040 | 15 | 1 | 1 | 100 | 976430 | -126503251200
+(16 rows)
+
--
-- epoch
--
@@ -1111,6 +1153,132 @@ SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
20
(1 row)
+--
+-- all possible fields
+--
+SELECT EXTRACT(MICROSECONDS FROM DATE '2020-08-11');
+ date_part
+-----------
+ 0
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS FROM DATE '2020-08-11');
+ date_part
+-----------
+ 0
+(1 row)
+
+SELECT EXTRACT(SECOND FROM DATE '2020-08-11');
+ date_part
+-----------
+ 0
+(1 row)
+
+SELECT EXTRACT(MINUTE FROM DATE '2020-08-11');
+ date_part
+-----------
+ 0
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '2020-08-11');
+ date_part
+-----------
+ 0
+(1 row)
+
+SELECT EXTRACT(DAY FROM DATE '2020-08-11');
+ date_part
+-----------
+ 11
+(1 row)
+
+SELECT EXTRACT(MONTH FROM DATE '2020-08-11');
+ date_part
+-----------
+ 8
+(1 row)
+
+SELECT EXTRACT(YEAR FROM DATE '2020-08-11');
+ date_part
+-----------
+ 2020
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '2020-08-11');
+ date_part
+-----------
+ 202
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2020-08-11');
+ date_part
+-----------
+ 21
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2020-08-11');
+ date_part
+-----------
+ 3
+(1 row)
+
+SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11');
+ date_part
+-----------
+ 2020
+(1 row)
+
+SELECT EXTRACT(QUARTER FROM DATE '2020-08-11');
+ date_part
+-----------
+ 3
+(1 row)
+
+SELECT EXTRACT(WEEK FROM DATE '2020-08-11');
+ date_part
+-----------
+ 33
+(1 row)
+
+SELECT EXTRACT(DOW FROM DATE '2020-08-11');
+ date_part
+-----------
+ 2
+(1 row)
+
+SELECT EXTRACT(ISODOW FROM DATE '2020-08-11');
+ date_part
+-----------
+ 2
+(1 row)
+
+SELECT EXTRACT(DOY FROM DATE '2020-08-11');
+ date_part
+-----------
+ 224
+(1 row)
+
+SELECT EXTRACT(TIMEZONE FROM DATE '2020-08-11');
+ERROR: timestamp units "timezone" not supported
+CONTEXT: SQL function "date_part" statement 1
+SELECT EXTRACT(TIMEZONE_M FROM DATE '2020-08-11');
+ERROR: timestamp units "timezone_m" not supported
+CONTEXT: SQL function "date_part" statement 1
+SELECT EXTRACT(TIMEZONE_H FROM DATE '2020-08-11');
+ERROR: timestamp units "timezone_h" not supported
+CONTEXT: SQL function "date_part" statement 1
+SELECT EXTRACT(EPOCH FROM DATE '2020-08-11');
+ date_part
+------------
+ 1597104000
+(1 row)
+
+SELECT EXTRACT(JULIAN FROM DATE '2020-08-11');
+ date_part
+-----------
+ 2459073
+(1 row)
+
--
-- test trunc function!
--
diff --git a/src/test/regress/expected/expressions.out b/src/test/regress/expected/expressions.out
index 4f4deaec22..05a6eb07b2 100644
--- a/src/test/regress/expected/expressions.out
+++ b/src/test/regress/expected/expressions.out
@@ -121,7 +121,7 @@ select count(*) from date_tbl
where f1 not between '1997-01-01' and '1998-01-01';
count
-------
- 12
+ 13
(1 row)
explain (costs off)
@@ -155,6 +155,6 @@ select count(*) from date_tbl
where f1 not between symmetric '1997-01-01' and '1998-01-01';
count
-------
- 12
+ 13
(1 row)
diff --git a/src/test/regress/sql/date.sql b/src/test/regress/sql/date.sql
index 1c3adf70ce..488f5faa07 100644
--- a/src/test/regress/sql/date.sql
+++ b/src/test/regress/sql/date.sql
@@ -20,12 +20,13 @@ CREATE TABLE DATE_TBL (f1 date);
INSERT INTO DATE_TBL VALUES ('2038-04-08');
INSERT INTO DATE_TBL VALUES ('2039-04-09');
INSERT INTO DATE_TBL VALUES ('2040-04-10');
+INSERT INTO DATE_TBL VALUES ('2040-04-10 BC');
-SELECT f1 AS "Fifteen" FROM DATE_TBL;
+SELECT f1 FROM DATE_TBL;
-SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+SELECT f1 FROM DATE_TBL WHERE f1 < '2000-01-01';
-SELECT f1 AS "Three" FROM DATE_TBL
+SELECT f1 FROM DATE_TBL
WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
--
@@ -218,6 +219,23 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- test extract!
--
+SELECT f1 as "date",
+ date_part('year', f1) AS year,
+ date_part('month', f1) AS month,
+ date_part('day', f1) AS day,
+ date_part('quarter', f1) AS quarter,
+ date_part('decade', f1) AS decade,
+ date_part('century', f1) AS century,
+ date_part('millennium', f1) AS millennium,
+ date_part('isoyear', f1) AS isoyear,
+ date_part('week', f1) AS week,
+ date_part('dow', f1) AS dow,
+ date_part('isodow', f1) AS isodow,
+ date_part('doy', f1) AS doy,
+ date_part('julian', f1) AS julian,
+ date_part('epoch', f1) AS epoch
+ FROM date_tbl;
+--
-- epoch
--
SELECT EXTRACT(EPOCH FROM DATE '1970-01-01'); -- 0
@@ -264,6 +282,31 @@ CREATE TABLE DATE_TBL (f1 date);
SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True; -- true
SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
--
+-- all possible fields
+--
+SELECT EXTRACT(MICROSECONDS FROM DATE '2020-08-11');
+SELECT EXTRACT(MILLISECONDS FROM DATE '2020-08-11');
+SELECT EXTRACT(SECOND FROM DATE '2020-08-11');
+SELECT EXTRACT(MINUTE FROM DATE '2020-08-11');
+SELECT EXTRACT(HOUR FROM DATE '2020-08-11');
+SELECT EXTRACT(DAY FROM DATE '2020-08-11');
+SELECT EXTRACT(MONTH FROM DATE '2020-08-11');
+SELECT EXTRACT(YEAR FROM DATE '2020-08-11');
+SELECT EXTRACT(DECADE FROM DATE '2020-08-11');
+SELECT EXTRACT(CENTURY FROM DATE '2020-08-11');
+SELECT EXTRACT(MILLENNIUM FROM DATE '2020-08-11');
+SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11');
+SELECT EXTRACT(QUARTER FROM DATE '2020-08-11');
+SELECT EXTRACT(WEEK FROM DATE '2020-08-11');
+SELECT EXTRACT(DOW FROM DATE '2020-08-11');
+SELECT EXTRACT(ISODOW FROM DATE '2020-08-11');
+SELECT EXTRACT(DOY FROM DATE '2020-08-11');
+SELECT EXTRACT(TIMEZONE FROM DATE '2020-08-11');
+SELECT EXTRACT(TIMEZONE_M FROM DATE '2020-08-11');
+SELECT EXTRACT(TIMEZONE_H FROM DATE '2020-08-11');
+SELECT EXTRACT(EPOCH FROM DATE '2020-08-11');
+SELECT EXTRACT(JULIAN FROM DATE '2020-08-11');
+--
-- test trunc function!
--
SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
--
2.28.0
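The precision problem motivating this patch series can be reproduced outside the server. As Tom Lane notes upthread, IEEE float8 can represent microsecond-resolution epoch values exactly for a couple of centuries around 1970; the microsecond is lost only when the value is printed with the historical 15-significant-digit float output, whereas shortest-round-trip output (Ryu, or `extra_float_digits = 1` on older branches) keeps it. A minimal Python illustration of that output-formatting effect (not PostgreSQL code):

```python
# 2001-09-09 01:46:40.000021 is 1000000000.000021 seconds after the Unix
# epoch.  The nearest IEEE double is well within a microsecond of this,
# so the binary value still carries the final digit.
v = 1000000000.000021

# Shortest-round-trip rendering (what Ryu produces) keeps all digits:
shortest = repr(v)           # '1000000000.000021'

# A fixed 15-significant-digit rendering (the pre-v12 default float
# output) has only 5 digits left after the 10 integer digits, so the
# last microsecond is dropped:
fifteen = "%.15g" % v        # '1000000000.00002'
```

This is why switching EXTRACT to numeric, rather than tweaking float output, is the durable fix: numeric carries the exact decimal value regardless of how it is later formatted.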
Attachment: v2-0002-Expose-internal-function-for-converting-int64-to-.patch (text/plain)
From 049021accfaa28acf0a515261b22e618c6f4ac71 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Wed, 12 Aug 2020 08:46:21 +0200
Subject: [PATCH v2 2/3] Expose internal function for converting int64 to
numeric
---
contrib/btree_gist/btree_numeric.c | 2 +-
contrib/jsonb_plperl/jsonb_plperl.c | 4 +-
src/backend/utils/adt/dbsize.c | 11 +--
src/backend/utils/adt/formatting.c | 19 ++---
src/backend/utils/adt/jsonpath_exec.c | 11 +--
src/backend/utils/adt/numeric.c | 116 +++++++-------------------
src/include/utils/numeric.h | 2 +
7 files changed, 43 insertions(+), 122 deletions(-)
diff --git a/contrib/btree_gist/btree_numeric.c b/contrib/btree_gist/btree_numeric.c
index d66901680e..35e466cdd9 100644
--- a/contrib/btree_gist/btree_numeric.c
+++ b/contrib/btree_gist/btree_numeric.c
@@ -195,7 +195,7 @@ gbt_numeric_penalty(PG_FUNCTION_ARGS)
}
else
{
- Numeric nul = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(0)));
+ Numeric nul = int64_to_numeric(0);
*result = 0.0;
diff --git a/contrib/jsonb_plperl/jsonb_plperl.c b/contrib/jsonb_plperl/jsonb_plperl.c
index b81ba54b80..22e90afe1b 100644
--- a/contrib/jsonb_plperl/jsonb_plperl.c
+++ b/contrib/jsonb_plperl/jsonb_plperl.c
@@ -216,9 +216,7 @@ SV_to_JsonbValue(SV *in, JsonbParseState **jsonb_state, bool is_elem)
IV ival = SvIV(in);
out.type = jbvNumeric;
- out.val.numeric =
- DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum((int64) ival)));
+ out.val.numeric = int64_to_numeric(ival);
}
else if (SvNOK(in))
{
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
index 2320c06a9b..793235d00c 100644
--- a/src/backend/utils/adt/dbsize.c
+++ b/src/backend/utils/adt/dbsize.c
@@ -579,14 +579,6 @@ numeric_to_cstring(Numeric n)
return DatumGetCString(DirectFunctionCall1(numeric_out, d));
}
-static Numeric
-int64_to_numeric(int64 v)
-{
- Datum d = Int64GetDatum(v);
-
- return DatumGetNumeric(DirectFunctionCall1(int8_numeric, d));
-}
-
static bool
numeric_is_less(Numeric a, Numeric b)
{
@@ -832,8 +824,7 @@ pg_size_bytes(PG_FUNCTION_ARGS)
{
Numeric mul_num;
- mul_num = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum(multiplier)));
+ mul_num = int64_to_numeric(multiplier);
num = DatumGetNumeric(DirectFunctionCall2(numeric_mul,
NumericGetDatum(mul_num),
diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c
index 9de63686ec..634a77e560 100644
--- a/src/backend/utils/adt/formatting.c
+++ b/src/backend/utils/adt/formatting.c
@@ -6071,10 +6071,8 @@ numeric_to_number(PG_FUNCTION_ARGS)
if (IS_MULTI(&Num))
{
Numeric x;
- Numeric a = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(10)));
- Numeric b = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(-Num.multi)));
+ Numeric a = int64_to_numeric(10);
+ Numeric b = int64_to_numeric(-Num.multi);
x = DatumGetNumeric(DirectFunctionCall2(numeric_power,
NumericGetDatum(a),
@@ -6163,10 +6161,8 @@ numeric_to_char(PG_FUNCTION_ARGS)
if (IS_MULTI(&Num))
{
- Numeric a = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(10)));
- Numeric b = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(Num.multi)));
+ Numeric a = int64_to_numeric(10);
+ Numeric b = int64_to_numeric(Num.multi);
x = DatumGetNumeric(DirectFunctionCall2(numeric_power,
NumericGetDatum(a),
@@ -6340,11 +6336,8 @@ int8_to_char(PG_FUNCTION_ARGS)
else if (IS_EEEE(&Num))
{
/* to avoid loss of precision, must go via numeric not float8 */
- Numeric val;
-
- val = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum(value)));
- orgnum = numeric_out_sci(val, Num.post);
+ orgnum = numeric_out_sci(int64_to_numeric(value),
+ Num.post);
/*
* numeric_out_sci() does not emit a sign for positive numbers. We
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index f146767bfc..7403c760b4 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -842,9 +842,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,
lastjbv = hasNext ? &tmpjbv : palloc(sizeof(*lastjbv));
lastjbv->type = jbvNumeric;
- lastjbv->val.numeric =
- DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(last)));
+ lastjbv->val.numeric = int64_to_numeric(last);
res = executeNextItem(cxt, jsp, &elem,
lastjbv, found, hasNext);
@@ -1012,9 +1010,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,
jb = palloc(sizeof(*jb));
jb->type = jbvNumeric;
- jb->val.numeric =
- DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(size)));
+ jb->val.numeric = int64_to_numeric(size);
res = executeNextItem(cxt, jsp, NULL, jb, found, false);
}
@@ -1979,8 +1975,7 @@ executeKeyValueMethod(JsonPathExecContext *cxt, JsonPathItem *jsp,
id += (int64) cxt->baseObject.id * INT64CONST(10000000000);
idval.type = jbvNumeric;
- idval.val.numeric = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum(id)));
+ idval.val.numeric = int64_to_numeric(id);
it = JsonbIteratorInit(jbc);
diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index ed825a1fdd..c2f04421f7 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -4073,23 +4073,29 @@ numeric_trim_scale(PG_FUNCTION_ARGS)
* ----------------------------------------------------------------------
*/
-
-Datum
-int4_numeric(PG_FUNCTION_ARGS)
+Numeric
+int64_to_numeric(int64 val)
{
- int32 val = PG_GETARG_INT32(0);
Numeric res;
NumericVar result;
init_var(&result);
- int64_to_numericvar((int64) val, &result);
+ int64_to_numericvar(val, &result);
res = make_result(&result);
free_var(&result);
- PG_RETURN_NUMERIC(res);
+ return res;
+}
+
+Datum
+int4_numeric(PG_FUNCTION_ARGS)
+{
+ int32 val = PG_GETARG_INT32(0);
+
+ PG_RETURN_NUMERIC(int64_to_numeric(val));
}
int32
@@ -4174,18 +4180,8 @@ Datum
int8_numeric(PG_FUNCTION_ARGS)
{
int64 val = PG_GETARG_INT64(0);
- Numeric res;
- NumericVar result;
- init_var(&result);
-
- int64_to_numericvar(val, &result);
-
- res = make_result(&result);
-
- free_var(&result);
-
- PG_RETURN_NUMERIC(res);
+ PG_RETURN_NUMERIC(int64_to_numeric(val));
}
@@ -4224,18 +4220,8 @@ Datum
int2_numeric(PG_FUNCTION_ARGS)
{
int16 val = PG_GETARG_INT16(0);
- Numeric res;
- NumericVar result;
-
- init_var(&result);
-
- int64_to_numericvar((int64) val, &result);
-
- res = make_result(&result);
- free_var(&result);
-
- PG_RETURN_NUMERIC(res);
+ PG_RETURN_NUMERIC(int64_to_numeric(val));
}
@@ -5290,11 +5276,7 @@ int2_accum(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_accum(state, (int128) PG_GETARG_INT16(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int2_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT16(1)));
#endif
}
@@ -5317,11 +5299,7 @@ int4_accum(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_accum(state, (int128) PG_GETARG_INT32(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT32(1)));
#endif
}
@@ -5340,13 +5318,7 @@ int8_accum(PG_FUNCTION_ARGS)
state = makeNumericAggState(fcinfo, true);
if (!PG_ARGISNULL(1))
- {
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
- }
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(1)));
PG_RETURN_POINTER(state);
}
@@ -5570,11 +5542,7 @@ int8_avg_accum(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_accum(state, (int128) PG_GETARG_INT64(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(1)));
#endif
}
@@ -5767,13 +5735,8 @@ int2_accum_inv(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_discard(state, (int128) PG_GETARG_INT16(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int2_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT16(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
#endif
}
@@ -5797,13 +5760,8 @@ int4_accum_inv(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_discard(state, (int128) PG_GETARG_INT32(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT32(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
#endif
}
@@ -5824,13 +5782,8 @@ int8_accum_inv(PG_FUNCTION_ARGS)
if (!PG_ARGISNULL(1))
{
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT64(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
}
@@ -5853,13 +5806,8 @@ int8_avg_accum_inv(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_discard(state, (int128) PG_GETARG_INT64(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT64(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
#endif
}
@@ -5914,8 +5862,7 @@ numeric_poly_avg(PG_FUNCTION_ARGS)
int128_to_numericvar(state->sumX, &result);
- countd = DirectFunctionCall1(int8_numeric,
- Int64GetDatumFast(state->N));
+ countd = NumericGetDatum(int64_to_numeric(state->N));
sumd = NumericGetDatum(make_result(&result));
free_var(&result);
@@ -5951,7 +5898,7 @@ numeric_avg(PG_FUNCTION_ARGS)
if (state->nInfcount > 0)
PG_RETURN_NUMERIC(make_result(&const_ninf));
- N_datum = DirectFunctionCall1(int8_numeric, Int64GetDatum(state->N));
+ N_datum = NumericGetDatum(int64_to_numeric(state->N));
init_var(&sumX_var);
accum_sum_final(&state->sumX, &sumX_var);
@@ -6411,7 +6358,6 @@ Datum
int8_sum(PG_FUNCTION_ARGS)
{
Numeric oldsum;
- Datum newval;
if (PG_ARGISNULL(0))
{
@@ -6419,8 +6365,7 @@ int8_sum(PG_FUNCTION_ARGS)
if (PG_ARGISNULL(1))
PG_RETURN_NULL(); /* still no non-null */
/* This is the first non-null input. */
- newval = DirectFunctionCall1(int8_numeric, PG_GETARG_DATUM(1));
- PG_RETURN_DATUM(newval);
+ PG_RETURN_NUMERIC(int64_to_numeric(PG_GETARG_INT64(1)));
}
/*
@@ -6436,10 +6381,9 @@ int8_sum(PG_FUNCTION_ARGS)
PG_RETURN_NUMERIC(oldsum);
/* OK to do the addition. */
- newval = DirectFunctionCall1(int8_numeric, PG_GETARG_DATUM(1));
-
PG_RETURN_DATUM(DirectFunctionCall2(numeric_add,
- NumericGetDatum(oldsum), newval));
+ NumericGetDatum(oldsum),
+ NumericGetDatum(int64_to_numeric(PG_GETARG_INT64(1)))));
}
@@ -6618,10 +6562,8 @@ int8_avg(PG_FUNCTION_ARGS)
if (transdata->count == 0)
PG_RETURN_NULL();
- countd = DirectFunctionCall1(int8_numeric,
- Int64GetDatumFast(transdata->count));
- sumd = DirectFunctionCall1(int8_numeric,
- Int64GetDatumFast(transdata->sum));
+ countd = NumericGetDatum(int64_to_numeric(transdata->count));
+ sumd = NumericGetDatum(int64_to_numeric(transdata->sum));
PG_RETURN_DATUM(DirectFunctionCall2(numeric_div, sumd, countd));
}
diff --git a/src/include/utils/numeric.h b/src/include/utils/numeric.h
index 0b7d4ba3c4..2a768b9a04 100644
--- a/src/include/utils/numeric.h
+++ b/src/include/utils/numeric.h
@@ -62,6 +62,8 @@ int32 numeric_maximum_size(int32 typmod);
extern char *numeric_out_sci(Numeric num, int scale);
extern char *numeric_normalize(Numeric num);
+extern Numeric int64_to_numeric(int64 val);
+
extern Numeric numeric_add_opt_error(Numeric num1, Numeric num2,
bool *have_error);
extern Numeric numeric_sub_opt_error(Numeric num1, Numeric num2,
--
2.28.0
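With `int64_to_numeric` exposed by the previous patch, the third patch replaces float arithmetic like `tm->tm_sec + fsec / 1000000.0` with exact numeric addition, division, and a round to the field's natural scale. The shape of that computation can be sketched with Python's `decimal` module (a hypothetical helper mirroring the patch's EXTRACT(SECOND ...) path, not the actual C code):

```python
from decimal import Decimal

def seconds_field(tm_sec: int, fsec: int) -> Decimal:
    # Mirrors the patch's numeric path: tm_sec + fsec / 1_000_000,
    # computed exactly in decimal and rounded to 6 fractional digits
    # (the equivalent of the numeric_round(..., 6) call), instead of
    # going through float8.
    raw = Decimal(tm_sec) + Decimal(fsec) / Decimal(1_000_000)
    return raw.quantize(Decimal("0.000001"))

# 40 seconds + 21 microseconds keeps the trailing digit that a
# 15-significant-digit float8 rendering of the full epoch value drops:
seconds_field(40, 21)  # Decimal('40.000021')
```

The rounding step matters: without it, numeric division would carry more fractional digits than the microsecond resolution of the underlying timestamp justifies.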
Attachment: v2-0003-Change-return-type-of-EXTRACT-to-numeric.patch (text/plain)
From f722bfbe7d5e0b15c1f5b8a692f04b9680cfcd69 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Wed, 12 Aug 2020 08:46:21 +0200
Subject: [PATCH v2 3/3] Change return type of EXTRACT to numeric
Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
---
doc/src/sgml/func.sgml | 10 +-
src/backend/utils/adt/date.c | 265 +++++++++++++++++--
src/backend/utils/adt/timestamp.c | 305 +++++++++++++++-------
src/include/catalog/pg_proc.dat | 15 +-
src/test/regress/expected/date.out | 100 ++-----
src/test/regress/expected/interval.out | 24 +-
src/test/regress/expected/timestamp.out | 268 +++++++++----------
src/test/regress/expected/timestamptz.out | 272 +++++++++----------
src/test/regress/sql/date.sql | 6 +-
9 files changed, 766 insertions(+), 499 deletions(-)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index f766c1bc67..f8d6ad62be 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -8626,7 +8626,7 @@ <title>Date/Time Functions</title>
<primary>date_part</primary>
</indexterm>
<function>date_part</function> ( <type>text</type>, <type>timestamp</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get timestamp subfield (equivalent to <function>extract</function>);
@@ -8641,7 +8641,7 @@ <title>Date/Time Functions</title>
<row>
<entry role="func_table_entry"><para role="func_signature">
<function>date_part</function> ( <type>text</type>, <type>interval</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get interval subfield (equivalent to <function>extract</function>);
@@ -8706,7 +8706,7 @@ <title>Date/Time Functions</title>
<primary>extract</primary>
</indexterm>
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>timestamp</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get timestamp subfield; see <xref linkend="functions-datetime-extract"/>
@@ -8720,7 +8720,7 @@ <title>Date/Time Functions</title>
<row>
<entry role="func_table_entry"><para role="func_signature">
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>interval</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get interval subfield; see <xref linkend="functions-datetime-extract"/>
@@ -9227,7 +9227,7 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
well.) <replaceable>field</replaceable> is an identifier or
string that selects what field to extract from the source value.
The <function>extract</function> function returns values of type
- <type>double precision</type>.
+ <type>numeric</type>.
The following are valid field names:
<!-- alphabetical -->
diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c
index eaaffa7137..b42fe6d1fc 100644
--- a/src/backend/utils/adt/date.c
+++ b/src/backend/utils/adt/date.c
@@ -31,6 +31,7 @@
#include "utils/builtins.h"
#include "utils/date.h"
#include "utils/datetime.h"
+#include "utils/numeric.h"
#include "utils/sortsupport.h"
/*
@@ -1092,6 +1093,180 @@ in_range_date_interval(PG_FUNCTION_ARGS)
}
+/* date_part()
+ * Extract specified field from date type.
+ */
+Datum
+date_part(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ DateADT date = PG_GETARG_DATEADT(1);
+ int64 result;
+ int type,
+ val;
+ char *lowunits;
+ int year, mon, mday;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (DATE_NOT_FINITE(date) && (type == UNITS || type == RESERV))
+ {
+ switch (val)
+ {
+ /* Oscillating units */
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_WEEK:
+ case DTK_DOW:
+ case DTK_ISODOW:
+ case DTK_DOY:
+ PG_RETURN_NULL();
+ break;
+
+ /* Monotonically-increasing units */
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_JULIAN:
+ case DTK_ISOYEAR:
+ case DTK_EPOCH:
+ if (DATE_IS_NOBEGIN(date))
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ else
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ }
+ }
+ else if (type == UNITS)
+ {
+ j2date(date + POSTGRES_EPOCH_JDATE, &year, &mon, &mday);
+
+ switch (val)
+ {
+ case DTK_DAY:
+ result = mday;
+ break;
+
+ case DTK_MONTH:
+ result = mon;
+ break;
+
+ case DTK_QUARTER:
+ result = (mon - 1) / 3 + 1;
+ break;
+
+ case DTK_WEEK:
+ result = date2isoweek(year, mon, mday);
+ break;
+
+ case DTK_YEAR:
+ if (year > 0)
+ result = year;
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ result = year - 1;
+ break;
+
+ case DTK_DECADE:
+ /* see comments in timestamp_part */
+ if (year >= 0)
+ result = year / 10;
+ else
+ result = -((8 - (year - 1)) / 10);
+ break;
+
+ case DTK_CENTURY:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ result = (year + 99) / 100;
+ else
+ result = -((99 - (year - 1)) / 100);
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ result = (year + 999) / 1000;
+ else
+ result = -((999 - (year - 1)) / 1000);
+ break;
+
+ case DTK_JULIAN:
+ result = date + POSTGRES_EPOCH_JDATE;
+ break;
+
+ case DTK_ISOYEAR:
+ result = date2isoyear(year, mon, mday);
+ /* Adjust BC years */
+ if (result <= 0)
+ result -= 1;
+ break;
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ result = j2day(date + POSTGRES_EPOCH_JDATE);
+ if (val == DTK_ISODOW && result == 0)
+ result = 7;
+ break;
+
+ case DTK_DOY:
+ result = date2j(year, mon, mday) - date2j(year, 1, 1) + 1;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ result = ((int64) date + POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("date units \"%s\" not recognized", lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(int64_to_numeric(result));
+}
+
+
/* Add an interval to a date, giving a new date.
* Must handle both positive and negative intervals.
*
@@ -1986,7 +2161,7 @@ time_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
TimeADT time = PG_GETARG_TIMEADT(1);
- float8 result;
+ Numeric result;
int type,
val;
char *lowunits;
@@ -2010,23 +2185,39 @@ time_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = int64_to_numeric(tm->tm_min);
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = int64_to_numeric(tm->tm_hour);
break;
case DTK_TZ:
@@ -2050,7 +2241,12 @@ time_part(PG_FUNCTION_ARGS)
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = time / 1000000.0;
+ result = numeric_div_opt_error(int64_to_numeric(time),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
}
else
{
@@ -2061,7 +2257,7 @@ time_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
@@ -2723,7 +2919,7 @@ timetz_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
TimeTzADT *time = PG_GETARG_TIMETZADT_P(1);
- float8 result;
+ Numeric result;
int type,
val;
char *lowunits;
@@ -2738,7 +2934,6 @@ timetz_part(PG_FUNCTION_ARGS)
if (type == UNITS)
{
- double dummy;
int tz;
fsec_t fsec;
struct pg_tm tt,
@@ -2749,38 +2944,54 @@ timetz_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_TZ:
- result = -tz;
+ result = int64_to_numeric(-tz);
break;
case DTK_TZ_MINUTE:
- result = -tz;
- result /= SECS_PER_MINUTE;
- FMODULO(result, dummy, (double) SECS_PER_MINUTE);
+ /* trunc(-tz / 60) % 60 */
+ result = numeric_mod_opt_error(int64_to_numeric(-tz / SECS_PER_MINUTE),
+ int64_to_numeric(SECS_PER_MINUTE),
+ NULL);
break;
case DTK_TZ_HOUR:
- dummy = -tz;
- FMODULO(dummy, result, (double) SECS_PER_HOUR);
+ result = int64_to_numeric(-tz / SECS_PER_HOUR);
break;
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = int64_to_numeric(tm->tm_min);
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = int64_to_numeric(tm->tm_hour);
break;
case DTK_DAY:
@@ -2800,7 +3011,15 @@ timetz_part(PG_FUNCTION_ARGS)
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = time->time / 1000000.0 + time->zone;
+ /* time->time / 1000000.0 + time->zone */
+ result = numeric_add_opt_error(numeric_div_opt_error(int64_to_numeric(time->time),
+ int64_to_numeric(1000000),
+ NULL),
+ int64_to_numeric(time->zone),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
}
else
{
@@ -2811,7 +3030,7 @@ timetz_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
/* timetz_zone()
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 5fe304cea7..8bc522b26e 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -35,6 +35,7 @@
#include "utils/date.h"
#include "utils/datetime.h"
#include "utils/float.h"
+#include "utils/numeric.h"
/*
* gcc's -ffast-math switch breaks routines that expect exact results from
@@ -4436,13 +4437,13 @@ date2isoyearday(int year, int mon, int mday)
*
* Used by timestamp_part and timestamptz_part when extracting from infinite
* timestamp[tz]. Returns +/-Infinity if that is the appropriate result,
- * otherwise returns zero (which should be taken as meaning to return NULL).
+ * otherwise returns NULL (which should be taken as meaning to return SQL NULL).
*
* Errors thrown here for invalid units should exactly match those that
* would be thrown in the calling functions, else there will be unexpected
* discrepancies between finite- and infinite-input cases.
*/
-static float8
+static Numeric
NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
bool isNegative, bool isTz)
{
@@ -4478,7 +4479,7 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
case DTK_TZ:
case DTK_TZ_MINUTE:
case DTK_TZ_HOUR:
- return 0.0;
+ return NULL;
/* Monotonically-increasing units */
case DTK_YEAR:
@@ -4489,9 +4490,15 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
case DTK_ISOYEAR:
case DTK_EPOCH:
if (isNegative)
- return -get_float8_infinity();
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
else
- return get_float8_infinity();
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
default:
if (isTz)
@@ -4504,7 +4511,7 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("timestamp units \"%s\" not supported",
lowunits)));
- return 0.0; /* keep compiler quiet */
+ return NULL; /* keep compiler quiet */
}
}
@@ -4516,7 +4523,7 @@ timestamp_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
Timestamp timestamp = PG_GETARG_TIMESTAMP(1);
- float8 result;
+ Numeric result;
Timestamp epoch;
int type,
val;
@@ -4539,7 +4546,7 @@ timestamp_part(PG_FUNCTION_ARGS)
TIMESTAMP_IS_NOBEGIN(timestamp),
false);
if (result)
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
else
PG_RETURN_NULL();
}
@@ -4554,47 +4561,63 @@ timestamp_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = int64_to_numeric(tm->tm_min);
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = int64_to_numeric(tm->tm_hour);
break;
case DTK_DAY:
- result = tm->tm_mday;
+ result = int64_to_numeric(tm->tm_mday);
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ result = int64_to_numeric(tm->tm_mon);
break;
case DTK_QUARTER:
- result = (tm->tm_mon - 1) / 3 + 1;
+ result = int64_to_numeric((tm->tm_mon - 1) / 3 + 1);
break;
case DTK_WEEK:
- result = (float8) date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ result = int64_to_numeric(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday));
break;
case DTK_YEAR:
if (tm->tm_year > 0)
- result = tm->tm_year;
+ result = int64_to_numeric(tm->tm_year);
else
/* there is no year 0, just 1 BC and 1 AD */
- result = tm->tm_year - 1;
+ result = int64_to_numeric(tm->tm_year - 1);
break;
case DTK_DECADE:
@@ -4605,9 +4628,9 @@ timestamp_part(PG_FUNCTION_ARGS)
* is 11 BC thru 2 BC...
*/
if (tm->tm_year >= 0)
- result = tm->tm_year / 10;
+ result = int64_to_numeric(tm->tm_year / 10);
else
- result = -((8 - (tm->tm_year - 1)) / 10);
+ result = int64_to_numeric(-((8 - (tm->tm_year - 1)) / 10));
break;
case DTK_CENTURY:
@@ -4619,43 +4642,53 @@ timestamp_part(PG_FUNCTION_ARGS)
* ----
*/
if (tm->tm_year > 0)
- result = (tm->tm_year + 99) / 100;
+ result = int64_to_numeric((tm->tm_year + 99) / 100);
else
/* caution: C division may have negative remainder */
- result = -((99 - (tm->tm_year - 1)) / 100);
+ result = int64_to_numeric(-((99 - (tm->tm_year - 1)) / 100));
break;
case DTK_MILLENNIUM:
/* see comments above. */
if (tm->tm_year > 0)
- result = (tm->tm_year + 999) / 1000;
+ result = int64_to_numeric((tm->tm_year + 999) / 1000);
else
- result = -((999 - (tm->tm_year - 1)) / 1000);
+ result = int64_to_numeric(-((999 - (tm->tm_year - 1)) / 1000));
break;
case DTK_JULIAN:
- result = date2j(tm->tm_year, tm->tm_mon, tm->tm_mday);
- result += ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
- tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY;
+ result = numeric_add_opt_error(
+ int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(
+ int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL);
break;
case DTK_ISOYEAR:
- result = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
/* Adjust BC years */
- if (result <= 0)
- result -= 1;
+ if (tmp <= 0)
+ tmp -= 1;
+ result = int64_to_numeric(tmp);
break;
+ }
case DTK_DOW:
case DTK_ISODOW:
- result = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
- if (val == DTK_ISODOW && result == 0)
- result = 7;
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = int64_to_numeric(tmp);
break;
+ }
case DTK_DOY:
- result = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
- - date2j(tm->tm_year, 1, 1) + 1);
+ result = int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
break;
case DTK_TZ:
@@ -4674,12 +4707,23 @@ timestamp_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_EPOCH:
+ /* (timestamp - epoch) / 1000000; do the subtraction in int64 when it cannot overflow */
epoch = SetEpochTimestamp();
- /* try to avoid precision loss in subtraction */
if (timestamp < (PG_INT64_MAX + epoch))
- result = (timestamp - epoch) / 1000000.0;
+ result = numeric_div_opt_error(
+ int64_to_numeric(timestamp - epoch),
+ int64_to_numeric(1000000),
+ NULL);
else
- result = ((float8) timestamp - epoch) / 1000000.0;
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
default:
@@ -4699,7 +4743,7 @@ timestamp_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
/* timestamptz_part()
@@ -4710,13 +4754,12 @@ timestamptz_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
TimestampTz timestamp = PG_GETARG_TIMESTAMPTZ(1);
- float8 result;
+ Numeric result;
Timestamp epoch;
int tz;
int type,
val;
char *lowunits;
- double dummy;
fsec_t fsec;
struct pg_tm tt,
*tm = &tt;
@@ -4735,7 +4778,7 @@ timestamptz_part(PG_FUNCTION_ARGS)
TIMESTAMP_IS_NOBEGIN(timestamp),
true);
if (result)
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
else
PG_RETURN_NULL();
}
@@ -4750,111 +4793,137 @@ timestamptz_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_TZ:
- result = -tz;
+ result = int64_to_numeric(-tz);
break;
case DTK_TZ_MINUTE:
- result = -tz;
- result /= MINS_PER_HOUR;
- FMODULO(result, dummy, (double) MINS_PER_HOUR);
+ /* trunc(-tz / 60) % 60 */
+ result = numeric_mod_opt_error(int64_to_numeric(-tz / SECS_PER_MINUTE),
+ int64_to_numeric(MINS_PER_HOUR),
+ NULL);
break;
case DTK_TZ_HOUR:
- dummy = -tz;
- FMODULO(dummy, result, (double) SECS_PER_HOUR);
+ result = int64_to_numeric(-tz / SECS_PER_HOUR);
break;
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = int64_to_numeric(tm->tm_min);
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = int64_to_numeric(tm->tm_hour);
break;
case DTK_DAY:
- result = tm->tm_mday;
+ result = int64_to_numeric(tm->tm_mday);
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ result = int64_to_numeric(tm->tm_mon);
break;
case DTK_QUARTER:
- result = (tm->tm_mon - 1) / 3 + 1;
+ result = int64_to_numeric((tm->tm_mon - 1) / 3 + 1);
break;
case DTK_WEEK:
- result = (float8) date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ result = int64_to_numeric(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday));
break;
case DTK_YEAR:
if (tm->tm_year > 0)
- result = tm->tm_year;
+ result = int64_to_numeric(tm->tm_year);
else
/* there is no year 0, just 1 BC and 1 AD */
- result = tm->tm_year - 1;
+ result = int64_to_numeric(tm->tm_year - 1);
break;
case DTK_DECADE:
/* see comments in timestamp_part */
- if (tm->tm_year > 0)
- result = tm->tm_year / 10;
+ if (tm->tm_year >= 0)
+ result = int64_to_numeric(tm->tm_year / 10);
else
- result = -((8 - (tm->tm_year - 1)) / 10);
+ result = int64_to_numeric(-((8 - (tm->tm_year - 1)) / 10));
break;
case DTK_CENTURY:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = (tm->tm_year + 99) / 100;
+ result = int64_to_numeric((tm->tm_year + 99) / 100);
else
- result = -((99 - (tm->tm_year - 1)) / 100);
+ result = int64_to_numeric(-((99 - (tm->tm_year - 1)) / 100));
break;
case DTK_MILLENNIUM:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = (tm->tm_year + 999) / 1000;
+ result = int64_to_numeric((tm->tm_year + 999) / 1000);
else
- result = -((999 - (tm->tm_year - 1)) / 1000);
+ result = int64_to_numeric(-((999 - (tm->tm_year - 1)) / 1000));
break;
case DTK_JULIAN:
- result = date2j(tm->tm_year, tm->tm_mon, tm->tm_mday);
- result += ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
- tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY;
+ result = numeric_add_opt_error(
+ int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(
+ int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL);
break;
case DTK_ISOYEAR:
- result = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
/* Adjust BC years */
- if (result <= 0)
- result -= 1;
+ if (tmp <= 0)
+ tmp -= 1;
+ result = int64_to_numeric(tmp);
break;
+ }
case DTK_DOW:
case DTK_ISODOW:
- result = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
- if (val == DTK_ISODOW && result == 0)
- result = 7;
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = int64_to_numeric(tmp);
break;
+ }
case DTK_DOY:
- result = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
- - date2j(tm->tm_year, 1, 1) + 1);
+ result = int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
break;
default:
@@ -4871,12 +4940,23 @@ timestamptz_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_EPOCH:
+ /* (timestamp - epoch) / 1000000; do the subtraction in int64 when it cannot overflow */
epoch = SetEpochTimestamp();
- /* try to avoid precision loss in subtraction */
if (timestamp < (PG_INT64_MAX + epoch))
- result = (timestamp - epoch) / 1000000.0;
+ result = numeric_div_opt_error(
+ int64_to_numeric(timestamp - epoch),
+ int64_to_numeric(1000000),
+ NULL);
else
- result = ((float8) timestamp - epoch) / 1000000.0;
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
default:
@@ -4897,7 +4977,7 @@ timestamptz_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
@@ -4909,7 +4989,7 @@ interval_part(PG_FUNCTION_ARGS)
{
text *units = PG_GETARG_TEXT_PP(0);
Interval *interval = PG_GETARG_INTERVAL_P(1);
- float8 result;
+ Numeric result;
int type,
val;
char *lowunits;
@@ -4932,54 +5012,71 @@ interval_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ /* tm->tm_sec * 1000000 + fsec */
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ result = int64_to_numeric(tm->tm_min);
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ result = int64_to_numeric(tm->tm_hour);
break;
case DTK_DAY:
- result = tm->tm_mday;
+ result = int64_to_numeric(tm->tm_mday);
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ result = int64_to_numeric(tm->tm_mon);
break;
case DTK_QUARTER:
- result = (tm->tm_mon / 3) + 1;
+ result = int64_to_numeric((tm->tm_mon / 3) + 1);
break;
case DTK_YEAR:
- result = tm->tm_year;
+ result = int64_to_numeric(tm->tm_year);
break;
case DTK_DECADE:
/* caution: C division may have negative remainder */
- result = tm->tm_year / 10;
+ result = int64_to_numeric(tm->tm_year / 10);
break;
case DTK_CENTURY:
/* caution: C division may have negative remainder */
- result = tm->tm_year / 100;
+ result = int64_to_numeric(tm->tm_year / 100);
break;
case DTK_MILLENNIUM:
/* caution: C division may have negative remainder */
- result = tm->tm_year / 1000;
+ result = int64_to_numeric(tm->tm_year / 1000);
break;
default:
@@ -4999,10 +5096,18 @@ interval_part(PG_FUNCTION_ARGS)
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = interval->time / 1000000.0;
- result += ((double) DAYS_PER_YEAR * SECS_PER_DAY) * (interval->month / MONTHS_PER_YEAR);
- result += ((double) DAYS_PER_MONTH * SECS_PER_DAY) * (interval->month % MONTHS_PER_YEAR);
- result += ((double) SECS_PER_DAY) * interval->day;
+ result =
+ numeric_add_opt_error(
+ numeric_div_opt_error(int64_to_numeric(interval->time),
+ int64_to_numeric(1000000),
+ NULL),
+ /*
+ * DAYS_PER_YEAR is fractional (365.25), so casting it to int64
+ * would silently change the result.  Compute the year part in
+ * quarter-days instead; SECS_PER_DAY is divisible by 4, so this
+ * stays in exact integer arithmetic.
+ */
+ int64_to_numeric(((int64) (4 * DAYS_PER_YEAR) * (interval->month / MONTHS_PER_YEAR) +
+ (int64) (4 * DAYS_PER_MONTH) * (interval->month % MONTHS_PER_YEAR) +
+ (int64) 4 * interval->day) * (SECS_PER_DAY / 4)),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
}
else
{
@@ -5013,7 +5118,7 @@ interval_part(PG_FUNCTION_ARGS)
result = 0;
}
- PG_RETURN_FLOAT8(result);
+ PG_RETURN_NUMERIC(result);
}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 082a11f270..34fd06975c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -2315,10 +2315,10 @@
proname => 'interval_mi', prorettype => 'interval',
proargtypes => 'interval interval', prosrc => 'interval_mi' },
{ oid => '1171', descr => 'extract field from timestamp with time zone',
- proname => 'date_part', provolatile => 's', prorettype => 'float8',
+ proname => 'date_part', provolatile => 's', prorettype => 'numeric',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_part' },
{ oid => '1172', descr => 'extract field from interval',
- proname => 'date_part', prorettype => 'float8',
+ proname => 'date_part', prorettype => 'numeric',
proargtypes => 'text interval', prosrc => 'interval_part' },
{ oid => '1174', descr => 'convert date to timestamp with time zone',
proname => 'timestamptz', provolatile => 's', prorettype => 'timestamptz',
@@ -2465,7 +2465,7 @@
proname => 'datetime_pl', prorettype => 'timestamp',
proargtypes => 'date time', prosrc => 'datetime_timestamp' },
{ oid => '1273', descr => 'extract field from time with time zone',
- proname => 'date_part', prorettype => 'float8', proargtypes => 'text timetz',
+ proname => 'date_part', prorettype => 'numeric', proargtypes => 'text timetz',
prosrc => 'timetz_part' },
{ oid => '1274',
proname => 'int84pl', prorettype => 'int8', proargtypes => 'int8 int4',
@@ -2812,11 +2812,10 @@
prosrc => 'textlen' },
{ oid => '1384', descr => 'extract field from date',
- proname => 'date_part', prolang => 'sql', prorettype => 'float8',
- proargtypes => 'text date',
- prosrc => 'select pg_catalog.date_part($1, cast($2 as timestamp without time zone))' },
+ proname => 'date_part', prorettype => 'numeric', proargtypes => 'text date',
+ prosrc => 'date_part' },
{ oid => '1385', descr => 'extract field from time',
- proname => 'date_part', prorettype => 'float8', proargtypes => 'text time',
+ proname => 'date_part', prorettype => 'numeric', proargtypes => 'text time',
prosrc => 'time_part' },
{ oid => '1386',
descr => 'date difference from today preserving months and years',
@@ -5721,7 +5720,7 @@
proname => 'date_trunc', prorettype => 'timestamp',
proargtypes => 'text timestamp', prosrc => 'timestamp_trunc' },
{ oid => '2021', descr => 'extract field from timestamp',
- proname => 'date_part', prorettype => 'float8',
+ proname => 'date_part', prorettype => 'numeric',
proargtypes => 'text timestamp', prosrc => 'timestamp_part' },
{ oid => '2024', descr => 'convert date to timestamp',
proname => 'timestamp', prorettype => 'timestamp', proargtypes => 'date',
diff --git a/src/test/regress/expected/date.out b/src/test/regress/expected/date.out
index d035fe1f1e..4d2115a4c6 100644
--- a/src/test/regress/expected/date.out
+++ b/src/test/regress/expected/date.out
@@ -974,13 +974,13 @@ SELECT EXTRACT(EPOCH FROM DATE '1970-01-01'); -- 0
SELECT EXTRACT(EPOCH FROM TIMESTAMP '1970-01-01'); -- 0
date_part
-----------
- 0
+ 0.000000
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00'); -- 0
date_part
-----------
- 0
+ 0.000000
(1 row)
--
@@ -1157,35 +1157,15 @@ SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
-- all possible fields
--
SELECT EXTRACT(MICROSECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "microseconds" not supported
SELECT EXTRACT(MILLISECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "milliseconds" not supported
SELECT EXTRACT(SECOND FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "second" not supported
SELECT EXTRACT(MINUTE FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "minute" not supported
SELECT EXTRACT(HOUR FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "hour" not supported
SELECT EXTRACT(DAY FROM DATE '2020-08-11');
date_part
-----------
@@ -1259,14 +1239,11 @@ SELECT EXTRACT(DOY FROM DATE '2020-08-11');
(1 row)
SELECT EXTRACT(TIMEZONE FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone" not supported
SELECT EXTRACT(TIMEZONE_M FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_m" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_m" not supported
SELECT EXTRACT(TIMEZONE_H FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_h" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_h" not supported
SELECT EXTRACT(EPOCH FROM DATE '2020-08-11');
date_part
------------
@@ -1372,13 +1349,13 @@ select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'
--
-- oscillating fields from non-finite date/timestamptz:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
date_part
-----------
(1 row)
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
date_part
-----------
@@ -1410,35 +1387,15 @@ SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity'); -- NULL
-- all possible fields
SELECT EXTRACT(MICROSECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "microseconds" not supported
SELECT EXTRACT(MILLISECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "milliseconds" not supported
SELECT EXTRACT(SECOND FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "second" not supported
SELECT EXTRACT(MINUTE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "minute" not supported
SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "hour" not supported
SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
date_part
-----------
@@ -1482,23 +1439,11 @@ SELECT EXTRACT(DOY FROM DATE 'infinity'); -- NULL
(1 row)
SELECT EXTRACT(TIMEZONE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone" not supported
SELECT EXTRACT(TIMEZONE_M FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone_m" not supported
SELECT EXTRACT(TIMEZONE_H FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone_h" not supported
--
-- monotonic fields from non-finite date/timestamptz:
--
@@ -1584,9 +1529,8 @@ SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
-ERROR: timestamp units "microsec" not recognized
-CONTEXT: SQL function "date_part" statement 1
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: date units "microsec" not recognized
+ERROR: date units "microsec" not recognized
-- test constructors
select make_date(2013, 7, 15);
make_date
diff --git a/src/test/regress/expected/interval.out b/src/test/regress/expected/interval.out
index fde4be5271..274a3714cf 100644
--- a/src/test/regress/expected/interval.out
+++ b/src/test/regress/expected/interval.out
@@ -948,18 +948,18 @@ SELECT f1,
EXTRACT(MILLENNIUM FROM f1) AS MILLENNIUM,
EXTRACT(EPOCH FROM f1) AS EPOCH
FROM INTERVAL_TBL;
- f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
--------------------------------+-------------+-------------+--------+--------+------+-----+-------+---------+------+--------+---------+------------+------------
- @ 1 min | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60
- @ 5 hours | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000
- @ 10 days | 0 | 0 | 0 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000
- @ 34 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072958400
- @ 3 mons | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000
- @ 14 secs ago | -14000000 | -14000 | -14 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14
- @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000 | 4 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784
- @ 6 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189345600
- @ 5 mons | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000
- @ 5 mons 12 hours | 0 | 0 | 0 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200
+ f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
+-------------------------------+-------------+-------------+------------+--------+------+-----+-------+---------+------+--------+---------+------------+-------------------
+ @ 1 min | 0 | 0.000 | 0.000000 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60.000000
+ @ 5 hours | 0 | 0.000 | 0.000000 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000.000000
+ @ 10 days | 0 | 0.000 | 0.000000 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000.000000
+ @ 34 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072958400.000000
+ @ 3 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000.000000
+ @ 14 secs ago | -14000000 | -14000.000 | -14.000000 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14.000000
+ @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000.000 | 4.000000 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784.000000
+ @ 6 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189345600.000000
+ @ 5 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000.000000
+ @ 5 mons 12 hours | 0 | 0.000 | 0.000000 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200.000000
(10 rows)
SELECT EXTRACT(FORTNIGHT FROM INTERVAL '2 days'); -- error
diff --git a/src/test/regress/expected/timestamp.out b/src/test/regress/expected/timestamp.out
index 5f97505a30..1c3f0db1d1 100644
--- a/src/test/regress/expected/timestamp.out
+++ b/src/test/regress/expected/timestamp.out
@@ -615,146 +615,146 @@ SELECT d1 as "timestamp",
date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
FROM TIMESTAMP_TBL;
- timestamp | year | month | day | hour | minute | second
------------------------------+-----------+-------+-----+------+--------+--------
- -infinity | -Infinity | | | | |
- infinity | Infinity | | | | |
- Thu Jan 01 00:00:00 1970 | 1970 | 1 | 1 | 0 | 0 | 0
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:02 1997 | 1997 | 2 | 10 | 17 | 32 | 2
- Mon Feb 10 17:32:01.4 1997 | 1997 | 2 | 10 | 17 | 32 | 1.4
- Mon Feb 10 17:32:01.5 1997 | 1997 | 2 | 10 | 17 | 32 | 1.5
- Mon Feb 10 17:32:01.6 1997 | 1997 | 2 | 10 | 17 | 32 | 1.6
- Thu Jan 02 00:00:00 1997 | 1997 | 1 | 2 | 0 | 0 | 0
- Thu Jan 02 03:04:05 1997 | 1997 | 1 | 2 | 3 | 4 | 5
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Jun 10 17:32:01 1997 | 1997 | 6 | 10 | 17 | 32 | 1
- Sat Sep 22 18:19:20 2001 | 2001 | 9 | 22 | 18 | 19 | 20
- Wed Mar 15 08:14:01 2000 | 2000 | 3 | 15 | 8 | 14 | 1
- Wed Mar 15 13:14:02 2000 | 2000 | 3 | 15 | 13 | 14 | 2
- Wed Mar 15 12:14:03 2000 | 2000 | 3 | 15 | 12 | 14 | 3
- Wed Mar 15 03:14:04 2000 | 2000 | 3 | 15 | 3 | 14 | 4
- Wed Mar 15 02:14:05 2000 | 2000 | 3 | 15 | 2 | 14 | 5
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:00 1997 | 1997 | 2 | 10 | 17 | 32 | 0
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Jun 10 18:32:01 1997 | 1997 | 6 | 10 | 18 | 32 | 1
- Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Feb 11 17:32:01 1997 | 1997 | 2 | 11 | 17 | 32 | 1
- Wed Feb 12 17:32:01 1997 | 1997 | 2 | 12 | 17 | 32 | 1
- Thu Feb 13 17:32:01 1997 | 1997 | 2 | 13 | 17 | 32 | 1
- Fri Feb 14 17:32:01 1997 | 1997 | 2 | 14 | 17 | 32 | 1
- Sat Feb 15 17:32:01 1997 | 1997 | 2 | 15 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 0097 BC | -97 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 0097 | 97 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 0597 | 597 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1097 | 1097 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 1697 | 1697 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 1797 | 1797 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1897 | 1897 | 2 | 16 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 2097 | 2097 | 2 | 16 | 17 | 32 | 1
- Wed Feb 28 17:32:01 1996 | 1996 | 2 | 28 | 17 | 32 | 1
- Thu Feb 29 17:32:01 1996 | 1996 | 2 | 29 | 17 | 32 | 1
- Fri Mar 01 17:32:01 1996 | 1996 | 3 | 1 | 17 | 32 | 1
- Mon Dec 30 17:32:01 1996 | 1996 | 12 | 30 | 17 | 32 | 1
- Tue Dec 31 17:32:01 1996 | 1996 | 12 | 31 | 17 | 32 | 1
- Wed Jan 01 17:32:01 1997 | 1997 | 1 | 1 | 17 | 32 | 1
- Fri Feb 28 17:32:01 1997 | 1997 | 2 | 28 | 17 | 32 | 1
- Sat Mar 01 17:32:01 1997 | 1997 | 3 | 1 | 17 | 32 | 1
- Tue Dec 30 17:32:01 1997 | 1997 | 12 | 30 | 17 | 32 | 1
- Wed Dec 31 17:32:01 1997 | 1997 | 12 | 31 | 17 | 32 | 1
- Fri Dec 31 17:32:01 1999 | 1999 | 12 | 31 | 17 | 32 | 1
- Sat Jan 01 17:32:01 2000 | 2000 | 1 | 1 | 17 | 32 | 1
- Sun Dec 31 17:32:01 2000 | 2000 | 12 | 31 | 17 | 32 | 1
- Mon Jan 01 17:32:01 2001 | 2001 | 1 | 1 | 17 | 32 | 1
+ timestamp | year | month | day | hour | minute | second
+-----------------------------+-----------+-------+-----+------+--------+-----------
+ -infinity | -Infinity | | | | |
+ infinity | Infinity | | | | |
+ Thu Jan 01 00:00:00 1970 | 1970 | 1 | 1 | 0 | 0 | 0.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:02 1997 | 1997 | 2 | 10 | 17 | 32 | 2.000000
+ Mon Feb 10 17:32:01.4 1997 | 1997 | 2 | 10 | 17 | 32 | 1.400000
+ Mon Feb 10 17:32:01.5 1997 | 1997 | 2 | 10 | 17 | 32 | 1.500000
+ Mon Feb 10 17:32:01.6 1997 | 1997 | 2 | 10 | 17 | 32 | 1.600000
+ Thu Jan 02 00:00:00 1997 | 1997 | 1 | 2 | 0 | 0 | 0.000000
+ Thu Jan 02 03:04:05 1997 | 1997 | 1 | 2 | 3 | 4 | 5.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Jun 10 17:32:01 1997 | 1997 | 6 | 10 | 17 | 32 | 1.000000
+ Sat Sep 22 18:19:20 2001 | 2001 | 9 | 22 | 18 | 19 | 20.000000
+ Wed Mar 15 08:14:01 2000 | 2000 | 3 | 15 | 8 | 14 | 1.000000
+ Wed Mar 15 13:14:02 2000 | 2000 | 3 | 15 | 13 | 14 | 2.000000
+ Wed Mar 15 12:14:03 2000 | 2000 | 3 | 15 | 12 | 14 | 3.000000
+ Wed Mar 15 03:14:04 2000 | 2000 | 3 | 15 | 3 | 14 | 4.000000
+ Wed Mar 15 02:14:05 2000 | 2000 | 3 | 15 | 2 | 14 | 5.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:00 1997 | 1997 | 2 | 10 | 17 | 32 | 0.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Jun 10 18:32:01 1997 | 1997 | 6 | 10 | 18 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Feb 11 17:32:01 1997 | 1997 | 2 | 11 | 17 | 32 | 1.000000
+ Wed Feb 12 17:32:01 1997 | 1997 | 2 | 12 | 17 | 32 | 1.000000
+ Thu Feb 13 17:32:01 1997 | 1997 | 2 | 13 | 17 | 32 | 1.000000
+ Fri Feb 14 17:32:01 1997 | 1997 | 2 | 14 | 17 | 32 | 1.000000
+ Sat Feb 15 17:32:01 1997 | 1997 | 2 | 15 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 0097 BC | -97 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 0097 | 97 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 0597 | 597 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1097 | 1097 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 1697 | 1697 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 1797 | 1797 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1897 | 1897 | 2 | 16 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 2097 | 2097 | 2 | 16 | 17 | 32 | 1.000000
+ Wed Feb 28 17:32:01 1996 | 1996 | 2 | 28 | 17 | 32 | 1.000000
+ Thu Feb 29 17:32:01 1996 | 1996 | 2 | 29 | 17 | 32 | 1.000000
+ Fri Mar 01 17:32:01 1996 | 1996 | 3 | 1 | 17 | 32 | 1.000000
+ Mon Dec 30 17:32:01 1996 | 1996 | 12 | 30 | 17 | 32 | 1.000000
+ Tue Dec 31 17:32:01 1996 | 1996 | 12 | 31 | 17 | 32 | 1.000000
+ Wed Jan 01 17:32:01 1997 | 1997 | 1 | 1 | 17 | 32 | 1.000000
+ Fri Feb 28 17:32:01 1997 | 1997 | 2 | 28 | 17 | 32 | 1.000000
+ Sat Mar 01 17:32:01 1997 | 1997 | 3 | 1 | 17 | 32 | 1.000000
+ Tue Dec 30 17:32:01 1997 | 1997 | 12 | 30 | 17 | 32 | 1.000000
+ Wed Dec 31 17:32:01 1997 | 1997 | 12 | 31 | 17 | 32 | 1.000000
+ Fri Dec 31 17:32:01 1999 | 1999 | 12 | 31 | 17 | 32 | 1.000000
+ Sat Jan 01 17:32:01 2000 | 2000 | 1 | 1 | 17 | 32 | 1.000000
+ Sun Dec 31 17:32:01 2000 | 2000 | 12 | 31 | 17 | 32 | 1.000000
+ Mon Jan 01 17:32:01 2001 | 2001 | 1 | 1 | 17 | 32 | 1.000000
(65 rows)
SELECT d1 as "timestamp",
date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
date_part( 'usec', d1) AS usec
FROM TIMESTAMP_TBL;
- timestamp | quarter | msec | usec
------------------------------+---------+-------+----------
- -infinity | | |
- infinity | | |
- Thu Jan 01 00:00:00 1970 | 1 | 0 | 0
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:02 1997 | 1 | 2000 | 2000000
- Mon Feb 10 17:32:01.4 1997 | 1 | 1400 | 1400000
- Mon Feb 10 17:32:01.5 1997 | 1 | 1500 | 1500000
- Mon Feb 10 17:32:01.6 1997 | 1 | 1600 | 1600000
- Thu Jan 02 00:00:00 1997 | 1 | 0 | 0
- Thu Jan 02 03:04:05 1997 | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Jun 10 17:32:01 1997 | 2 | 1000 | 1000000
- Sat Sep 22 18:19:20 2001 | 3 | 20000 | 20000000
- Wed Mar 15 08:14:01 2000 | 1 | 1000 | 1000000
- Wed Mar 15 13:14:02 2000 | 1 | 2000 | 2000000
- Wed Mar 15 12:14:03 2000 | 1 | 3000 | 3000000
- Wed Mar 15 03:14:04 2000 | 1 | 4000 | 4000000
- Wed Mar 15 02:14:05 2000 | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:00 1997 | 1 | 0 | 0
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Jun 10 18:32:01 1997 | 2 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Feb 11 17:32:01 1997 | 1 | 1000 | 1000000
- Wed Feb 12 17:32:01 1997 | 1 | 1000 | 1000000
- Thu Feb 13 17:32:01 1997 | 1 | 1000 | 1000000
- Fri Feb 14 17:32:01 1997 | 1 | 1000 | 1000000
- Sat Feb 15 17:32:01 1997 | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 0097 BC | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 0097 | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 0597 | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1097 | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 1697 | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 1797 | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1897 | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 2097 | 1 | 1000 | 1000000
- Wed Feb 28 17:32:01 1996 | 1 | 1000 | 1000000
- Thu Feb 29 17:32:01 1996 | 1 | 1000 | 1000000
- Fri Mar 01 17:32:01 1996 | 1 | 1000 | 1000000
- Mon Dec 30 17:32:01 1996 | 4 | 1000 | 1000000
- Tue Dec 31 17:32:01 1996 | 4 | 1000 | 1000000
- Wed Jan 01 17:32:01 1997 | 1 | 1000 | 1000000
- Fri Feb 28 17:32:01 1997 | 1 | 1000 | 1000000
- Sat Mar 01 17:32:01 1997 | 1 | 1000 | 1000000
- Tue Dec 30 17:32:01 1997 | 4 | 1000 | 1000000
- Wed Dec 31 17:32:01 1997 | 4 | 1000 | 1000000
- Fri Dec 31 17:32:01 1999 | 4 | 1000 | 1000000
- Sat Jan 01 17:32:01 2000 | 1 | 1000 | 1000000
- Sun Dec 31 17:32:01 2000 | 4 | 1000 | 1000000
- Mon Jan 01 17:32:01 2001 | 1 | 1000 | 1000000
+ timestamp | quarter | msec | usec
+-----------------------------+---------+-----------+----------
+ -infinity | | |
+ infinity | | |
+ Thu Jan 01 00:00:00 1970 | 1 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:02 1997 | 1 | 2000.000 | 2000000
+ Mon Feb 10 17:32:01.4 1997 | 1 | 1400.000 | 1400000
+ Mon Feb 10 17:32:01.5 1997 | 1 | 1500.000 | 1500000
+ Mon Feb 10 17:32:01.6 1997 | 1 | 1600.000 | 1600000
+ Thu Jan 02 00:00:00 1997 | 1 | 0.000 | 0
+ Thu Jan 02 03:04:05 1997 | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Jun 10 17:32:01 1997 | 2 | 1000.000 | 1000000
+ Sat Sep 22 18:19:20 2001 | 3 | 20000.000 | 20000000
+ Wed Mar 15 08:14:01 2000 | 1 | 1000.000 | 1000000
+ Wed Mar 15 13:14:02 2000 | 1 | 2000.000 | 2000000
+ Wed Mar 15 12:14:03 2000 | 1 | 3000.000 | 3000000
+ Wed Mar 15 03:14:04 2000 | 1 | 4000.000 | 4000000
+ Wed Mar 15 02:14:05 2000 | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:00 1997 | 1 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Jun 10 18:32:01 1997 | 2 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Feb 11 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Wed Feb 12 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Thu Feb 13 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Fri Feb 14 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sat Feb 15 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 0097 BC | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 0097 | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 0597 | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1097 | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 1697 | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 1797 | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1897 | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 2097 | 1 | 1000.000 | 1000000
+ Wed Feb 28 17:32:01 1996 | 1 | 1000.000 | 1000000
+ Thu Feb 29 17:32:01 1996 | 1 | 1000.000 | 1000000
+ Fri Mar 01 17:32:01 1996 | 1 | 1000.000 | 1000000
+ Mon Dec 30 17:32:01 1996 | 4 | 1000.000 | 1000000
+ Tue Dec 31 17:32:01 1996 | 4 | 1000.000 | 1000000
+ Wed Jan 01 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Fri Feb 28 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Sat Mar 01 17:32:01 1997 | 1 | 1000.000 | 1000000
+ Tue Dec 30 17:32:01 1997 | 4 | 1000.000 | 1000000
+ Wed Dec 31 17:32:01 1997 | 4 | 1000.000 | 1000000
+ Fri Dec 31 17:32:01 1999 | 4 | 1000.000 | 1000000
+ Sat Jan 01 17:32:01 2000 | 1 | 1000.000 | 1000000
+ Sun Dec 31 17:32:01 2000 | 4 | 1000.000 | 1000000
+ Mon Jan 01 17:32:01 2001 | 1 | 1000.000 | 1000000
(65 rows)
SELECT d1 as "timestamp",
diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out
index 639b50308e..2420d433de 100644
--- a/src/test/regress/expected/timestamptz.out
+++ b/src/test/regress/expected/timestamptz.out
@@ -733,148 +733,148 @@ SELECT d1 as timestamptz,
date_part( 'day', d1) AS day, date_part( 'hour', d1) AS hour,
date_part( 'minute', d1) AS minute, date_part( 'second', d1) AS second
FROM TIMESTAMPTZ_TBL;
- timestamptz | year | month | day | hour | minute | second
----------------------------------+-----------+-------+-----+------+--------+--------
- -infinity | -Infinity | | | | |
- infinity | Infinity | | | | |
- Wed Dec 31 16:00:00 1969 PST | 1969 | 12 | 31 | 16 | 0 | 0
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:02 1997 PST | 1997 | 2 | 10 | 17 | 32 | 2
- Mon Feb 10 17:32:01.4 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.4
- Mon Feb 10 17:32:01.5 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.5
- Mon Feb 10 17:32:01.6 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.6
- Thu Jan 02 00:00:00 1997 PST | 1997 | 1 | 2 | 0 | 0 | 0
- Thu Jan 02 03:04:05 1997 PST | 1997 | 1 | 2 | 3 | 4 | 5
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Jun 10 17:32:01 1997 PDT | 1997 | 6 | 10 | 17 | 32 | 1
- Sat Sep 22 18:19:20 2001 PDT | 2001 | 9 | 22 | 18 | 19 | 20
- Wed Mar 15 08:14:01 2000 PST | 2000 | 3 | 15 | 8 | 14 | 1
- Wed Mar 15 04:14:02 2000 PST | 2000 | 3 | 15 | 4 | 14 | 2
- Wed Mar 15 02:14:03 2000 PST | 2000 | 3 | 15 | 2 | 14 | 3
- Wed Mar 15 03:14:04 2000 PST | 2000 | 3 | 15 | 3 | 14 | 4
- Wed Mar 15 01:14:05 2000 PST | 2000 | 3 | 15 | 1 | 14 | 5
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:00 1997 PST | 1997 | 2 | 10 | 17 | 32 | 0
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1
- Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1
- Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1
- Mon Feb 10 14:32:01 1997 PST | 1997 | 2 | 10 | 14 | 32 | 1
- Thu Jul 10 14:32:01 1997 PDT | 1997 | 7 | 10 | 14 | 32 | 1
- Tue Jun 10 18:32:01 1997 PDT | 1997 | 6 | 10 | 18 | 32 | 1
- Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1
- Tue Feb 11 17:32:01 1997 PST | 1997 | 2 | 11 | 17 | 32 | 1
- Wed Feb 12 17:32:01 1997 PST | 1997 | 2 | 12 | 17 | 32 | 1
- Thu Feb 13 17:32:01 1997 PST | 1997 | 2 | 13 | 17 | 32 | 1
- Fri Feb 14 17:32:01 1997 PST | 1997 | 2 | 14 | 17 | 32 | 1
- Sat Feb 15 17:32:01 1997 PST | 1997 | 2 | 15 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 0097 PST BC | -97 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 0097 PST | 97 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 0597 PST | 597 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1097 PST | 1097 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 1697 PST | 1697 | 2 | 16 | 17 | 32 | 1
- Thu Feb 16 17:32:01 1797 PST | 1797 | 2 | 16 | 17 | 32 | 1
- Tue Feb 16 17:32:01 1897 PST | 1897 | 2 | 16 | 17 | 32 | 1
- Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1
- Sat Feb 16 17:32:01 2097 PST | 2097 | 2 | 16 | 17 | 32 | 1
- Wed Feb 28 17:32:01 1996 PST | 1996 | 2 | 28 | 17 | 32 | 1
- Thu Feb 29 17:32:01 1996 PST | 1996 | 2 | 29 | 17 | 32 | 1
- Fri Mar 01 17:32:01 1996 PST | 1996 | 3 | 1 | 17 | 32 | 1
- Mon Dec 30 17:32:01 1996 PST | 1996 | 12 | 30 | 17 | 32 | 1
- Tue Dec 31 17:32:01 1996 PST | 1996 | 12 | 31 | 17 | 32 | 1
- Wed Jan 01 17:32:01 1997 PST | 1997 | 1 | 1 | 17 | 32 | 1
- Fri Feb 28 17:32:01 1997 PST | 1997 | 2 | 28 | 17 | 32 | 1
- Sat Mar 01 17:32:01 1997 PST | 1997 | 3 | 1 | 17 | 32 | 1
- Tue Dec 30 17:32:01 1997 PST | 1997 | 12 | 30 | 17 | 32 | 1
- Wed Dec 31 17:32:01 1997 PST | 1997 | 12 | 31 | 17 | 32 | 1
- Fri Dec 31 17:32:01 1999 PST | 1999 | 12 | 31 | 17 | 32 | 1
- Sat Jan 01 17:32:01 2000 PST | 2000 | 1 | 1 | 17 | 32 | 1
- Sun Dec 31 17:32:01 2000 PST | 2000 | 12 | 31 | 17 | 32 | 1
- Mon Jan 01 17:32:01 2001 PST | 2001 | 1 | 1 | 17 | 32 | 1
+ timestamptz | year | month | day | hour | minute | second
+---------------------------------+-----------+-------+-----+------+--------+-----------
+ -infinity | -Infinity | | | | |
+ infinity | Infinity | | | | |
+ Wed Dec 31 16:00:00 1969 PST | 1969 | 12 | 31 | 16 | 0 | 0.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:02 1997 PST | 1997 | 2 | 10 | 17 | 32 | 2.000000
+ Mon Feb 10 17:32:01.4 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.400000
+ Mon Feb 10 17:32:01.5 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.500000
+ Mon Feb 10 17:32:01.6 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.600000
+ Thu Jan 02 00:00:00 1997 PST | 1997 | 1 | 2 | 0 | 0 | 0.000000
+ Thu Jan 02 03:04:05 1997 PST | 1997 | 1 | 2 | 3 | 4 | 5.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Jun 10 17:32:01 1997 PDT | 1997 | 6 | 10 | 17 | 32 | 1.000000
+ Sat Sep 22 18:19:20 2001 PDT | 2001 | 9 | 22 | 18 | 19 | 20.000000
+ Wed Mar 15 08:14:01 2000 PST | 2000 | 3 | 15 | 8 | 14 | 1.000000
+ Wed Mar 15 04:14:02 2000 PST | 2000 | 3 | 15 | 4 | 14 | 2.000000
+ Wed Mar 15 02:14:03 2000 PST | 2000 | 3 | 15 | 2 | 14 | 3.000000
+ Wed Mar 15 03:14:04 2000 PST | 2000 | 3 | 15 | 3 | 14 | 4.000000
+ Wed Mar 15 01:14:05 2000 PST | 2000 | 3 | 15 | 1 | 14 | 5.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:00 1997 PST | 1997 | 2 | 10 | 17 | 32 | 0.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1.000000
+ Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1.000000
+ Mon Feb 10 09:32:01 1997 PST | 1997 | 2 | 10 | 9 | 32 | 1.000000
+ Mon Feb 10 14:32:01 1997 PST | 1997 | 2 | 10 | 14 | 32 | 1.000000
+ Thu Jul 10 14:32:01 1997 PDT | 1997 | 7 | 10 | 14 | 32 | 1.000000
+ Tue Jun 10 18:32:01 1997 PDT | 1997 | 6 | 10 | 18 | 32 | 1.000000
+ Mon Feb 10 17:32:01 1997 PST | 1997 | 2 | 10 | 17 | 32 | 1.000000
+ Tue Feb 11 17:32:01 1997 PST | 1997 | 2 | 11 | 17 | 32 | 1.000000
+ Wed Feb 12 17:32:01 1997 PST | 1997 | 2 | 12 | 17 | 32 | 1.000000
+ Thu Feb 13 17:32:01 1997 PST | 1997 | 2 | 13 | 17 | 32 | 1.000000
+ Fri Feb 14 17:32:01 1997 PST | 1997 | 2 | 14 | 17 | 32 | 1.000000
+ Sat Feb 15 17:32:01 1997 PST | 1997 | 2 | 15 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 0097 PST BC | -97 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 0097 PST | 97 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 0597 PST | 597 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1097 PST | 1097 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 1697 PST | 1697 | 2 | 16 | 17 | 32 | 1.000000
+ Thu Feb 16 17:32:01 1797 PST | 1797 | 2 | 16 | 17 | 32 | 1.000000
+ Tue Feb 16 17:32:01 1897 PST | 1897 | 2 | 16 | 17 | 32 | 1.000000
+ Sun Feb 16 17:32:01 1997 PST | 1997 | 2 | 16 | 17 | 32 | 1.000000
+ Sat Feb 16 17:32:01 2097 PST | 2097 | 2 | 16 | 17 | 32 | 1.000000
+ Wed Feb 28 17:32:01 1996 PST | 1996 | 2 | 28 | 17 | 32 | 1.000000
+ Thu Feb 29 17:32:01 1996 PST | 1996 | 2 | 29 | 17 | 32 | 1.000000
+ Fri Mar 01 17:32:01 1996 PST | 1996 | 3 | 1 | 17 | 32 | 1.000000
+ Mon Dec 30 17:32:01 1996 PST | 1996 | 12 | 30 | 17 | 32 | 1.000000
+ Tue Dec 31 17:32:01 1996 PST | 1996 | 12 | 31 | 17 | 32 | 1.000000
+ Wed Jan 01 17:32:01 1997 PST | 1997 | 1 | 1 | 17 | 32 | 1.000000
+ Fri Feb 28 17:32:01 1997 PST | 1997 | 2 | 28 | 17 | 32 | 1.000000
+ Sat Mar 01 17:32:01 1997 PST | 1997 | 3 | 1 | 17 | 32 | 1.000000
+ Tue Dec 30 17:32:01 1997 PST | 1997 | 12 | 30 | 17 | 32 | 1.000000
+ Wed Dec 31 17:32:01 1997 PST | 1997 | 12 | 31 | 17 | 32 | 1.000000
+ Fri Dec 31 17:32:01 1999 PST | 1999 | 12 | 31 | 17 | 32 | 1.000000
+ Sat Jan 01 17:32:01 2000 PST | 2000 | 1 | 1 | 17 | 32 | 1.000000
+ Sun Dec 31 17:32:01 2000 PST | 2000 | 12 | 31 | 17 | 32 | 1.000000
+ Mon Jan 01 17:32:01 2001 PST | 2001 | 1 | 1 | 17 | 32 | 1.000000
(66 rows)
SELECT d1 as timestamptz,
date_part( 'quarter', d1) AS quarter, date_part( 'msec', d1) AS msec,
date_part( 'usec', d1) AS usec
FROM TIMESTAMPTZ_TBL;
- timestamptz | quarter | msec | usec
----------------------------------+---------+-------+----------
- -infinity | | |
- infinity | | |
- Wed Dec 31 16:00:00 1969 PST | 4 | 0 | 0
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:02 1997 PST | 1 | 2000 | 2000000
- Mon Feb 10 17:32:01.4 1997 PST | 1 | 1400 | 1400000
- Mon Feb 10 17:32:01.5 1997 PST | 1 | 1500 | 1500000
- Mon Feb 10 17:32:01.6 1997 PST | 1 | 1600 | 1600000
- Thu Jan 02 00:00:00 1997 PST | 1 | 0 | 0
- Thu Jan 02 03:04:05 1997 PST | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Jun 10 17:32:01 1997 PDT | 2 | 1000 | 1000000
- Sat Sep 22 18:19:20 2001 PDT | 3 | 20000 | 20000000
- Wed Mar 15 08:14:01 2000 PST | 1 | 1000 | 1000000
- Wed Mar 15 04:14:02 2000 PST | 1 | 2000 | 2000000
- Wed Mar 15 02:14:03 2000 PST | 1 | 3000 | 3000000
- Wed Mar 15 03:14:04 2000 PST | 1 | 4000 | 4000000
- Wed Mar 15 01:14:05 2000 PST | 1 | 5000 | 5000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:00 1997 PST | 1 | 0 | 0
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 09:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 09:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 09:32:01 1997 PST | 1 | 1000 | 1000000
- Mon Feb 10 14:32:01 1997 PST | 1 | 1000 | 1000000
- Thu Jul 10 14:32:01 1997 PDT | 3 | 1000 | 1000000
- Tue Jun 10 18:32:01 1997 PDT | 2 | 1000 | 1000000
- Mon Feb 10 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Feb 11 17:32:01 1997 PST | 1 | 1000 | 1000000
- Wed Feb 12 17:32:01 1997 PST | 1 | 1000 | 1000000
- Thu Feb 13 17:32:01 1997 PST | 1 | 1000 | 1000000
- Fri Feb 14 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sat Feb 15 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 0097 PST BC | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 0097 PST | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 0597 PST | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1097 PST | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 1697 PST | 1 | 1000 | 1000000
- Thu Feb 16 17:32:01 1797 PST | 1 | 1000 | 1000000
- Tue Feb 16 17:32:01 1897 PST | 1 | 1000 | 1000000
- Sun Feb 16 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sat Feb 16 17:32:01 2097 PST | 1 | 1000 | 1000000
- Wed Feb 28 17:32:01 1996 PST | 1 | 1000 | 1000000
- Thu Feb 29 17:32:01 1996 PST | 1 | 1000 | 1000000
- Fri Mar 01 17:32:01 1996 PST | 1 | 1000 | 1000000
- Mon Dec 30 17:32:01 1996 PST | 4 | 1000 | 1000000
- Tue Dec 31 17:32:01 1996 PST | 4 | 1000 | 1000000
- Wed Jan 01 17:32:01 1997 PST | 1 | 1000 | 1000000
- Fri Feb 28 17:32:01 1997 PST | 1 | 1000 | 1000000
- Sat Mar 01 17:32:01 1997 PST | 1 | 1000 | 1000000
- Tue Dec 30 17:32:01 1997 PST | 4 | 1000 | 1000000
- Wed Dec 31 17:32:01 1997 PST | 4 | 1000 | 1000000
- Fri Dec 31 17:32:01 1999 PST | 4 | 1000 | 1000000
- Sat Jan 01 17:32:01 2000 PST | 1 | 1000 | 1000000
- Sun Dec 31 17:32:01 2000 PST | 4 | 1000 | 1000000
- Mon Jan 01 17:32:01 2001 PST | 1 | 1000 | 1000000
+ timestamptz | quarter | msec | usec
+---------------------------------+---------+-----------+----------
+ -infinity | | |
+ infinity | | |
+ Wed Dec 31 16:00:00 1969 PST | 4 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:02 1997 PST | 1 | 2000.000 | 2000000
+ Mon Feb 10 17:32:01.4 1997 PST | 1 | 1400.000 | 1400000
+ Mon Feb 10 17:32:01.5 1997 PST | 1 | 1500.000 | 1500000
+ Mon Feb 10 17:32:01.6 1997 PST | 1 | 1600.000 | 1600000
+ Thu Jan 02 00:00:00 1997 PST | 1 | 0.000 | 0
+ Thu Jan 02 03:04:05 1997 PST | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Jun 10 17:32:01 1997 PDT | 2 | 1000.000 | 1000000
+ Sat Sep 22 18:19:20 2001 PDT | 3 | 20000.000 | 20000000
+ Wed Mar 15 08:14:01 2000 PST | 1 | 1000.000 | 1000000
+ Wed Mar 15 04:14:02 2000 PST | 1 | 2000.000 | 2000000
+ Wed Mar 15 02:14:03 2000 PST | 1 | 3000.000 | 3000000
+ Wed Mar 15 03:14:04 2000 PST | 1 | 4000.000 | 4000000
+ Wed Mar 15 01:14:05 2000 PST | 1 | 5000.000 | 5000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:00 1997 PST | 1 | 0.000 | 0
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 09:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 09:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 09:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Mon Feb 10 14:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Thu Jul 10 14:32:01 1997 PDT | 3 | 1000.000 | 1000000
+ Tue Jun 10 18:32:01 1997 PDT | 2 | 1000.000 | 1000000
+ Mon Feb 10 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Feb 11 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Wed Feb 12 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Thu Feb 13 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Fri Feb 14 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sat Feb 15 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 0097 PST BC | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 0097 PST | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 0597 PST | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1097 PST | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 1697 PST | 1 | 1000.000 | 1000000
+ Thu Feb 16 17:32:01 1797 PST | 1 | 1000.000 | 1000000
+ Tue Feb 16 17:32:01 1897 PST | 1 | 1000.000 | 1000000
+ Sun Feb 16 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sat Feb 16 17:32:01 2097 PST | 1 | 1000.000 | 1000000
+ Wed Feb 28 17:32:01 1996 PST | 1 | 1000.000 | 1000000
+ Thu Feb 29 17:32:01 1996 PST | 1 | 1000.000 | 1000000
+ Fri Mar 01 17:32:01 1996 PST | 1 | 1000.000 | 1000000
+ Mon Dec 30 17:32:01 1996 PST | 4 | 1000.000 | 1000000
+ Tue Dec 31 17:32:01 1996 PST | 4 | 1000.000 | 1000000
+ Wed Jan 01 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Fri Feb 28 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Sat Mar 01 17:32:01 1997 PST | 1 | 1000.000 | 1000000
+ Tue Dec 30 17:32:01 1997 PST | 4 | 1000.000 | 1000000
+ Wed Dec 31 17:32:01 1997 PST | 4 | 1000.000 | 1000000
+ Fri Dec 31 17:32:01 1999 PST | 4 | 1000.000 | 1000000
+ Sat Jan 01 17:32:01 2000 PST | 1 | 1000.000 | 1000000
+ Sun Dec 31 17:32:01 2000 PST | 4 | 1000.000 | 1000000
+ Mon Jan 01 17:32:01 2001 PST | 1 | 1000.000 | 1000000
(66 rows)
SELECT d1 as timestamptz,
diff --git a/src/test/regress/sql/date.sql b/src/test/regress/sql/date.sql
index 488f5faa07..5c55abaf32 100644
--- a/src/test/regress/sql/date.sql
+++ b/src/test/regress/sql/date.sql
@@ -329,8 +329,8 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- oscillating fields from non-finite date/timestamptz:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMP 'infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMP '-infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity'); -- NULL
@@ -371,7 +371,7 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: date units "microsec" not recognized
-- test constructors
select make_date(2013, 7, 15);
--
2.28.0
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I reviewed the 0002 patch, finding one bug (in int8_sum) and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
I continue to think that we can't commit 0003 in this form, because
of the breakage that will ensue in stored views. As I said upthread,
we should leave the existing SQL-exposed functions alone, invent
new ones that return numeric, and alter the parser to translate
EXTRACT constructs to the new functions. This approach would also
provide an "out" for anyone who does complain about the performance
cost --- they can just continue to use the old functions.
regards, tom lane
Attachments:
v3-0002-expose-int64_to_numeric.patch (text/x-diff; charset=us-ascii)
diff --git a/contrib/btree_gist/btree_numeric.c b/contrib/btree_gist/btree_numeric.c
index d66901680e..35e466cdd9 100644
--- a/contrib/btree_gist/btree_numeric.c
+++ b/contrib/btree_gist/btree_numeric.c
@@ -195,7 +195,7 @@ gbt_numeric_penalty(PG_FUNCTION_ARGS)
}
else
{
- Numeric nul = DatumGetNumeric(DirectFunctionCall1(int4_numeric, Int32GetDatum(0)));
+ Numeric nul = int64_to_numeric(0);
*result = 0.0;
diff --git a/contrib/jsonb_plperl/jsonb_plperl.c b/contrib/jsonb_plperl/jsonb_plperl.c
index b81ba54b80..22e90afe1b 100644
--- a/contrib/jsonb_plperl/jsonb_plperl.c
+++ b/contrib/jsonb_plperl/jsonb_plperl.c
@@ -216,9 +216,7 @@ SV_to_JsonbValue(SV *in, JsonbParseState **jsonb_state, bool is_elem)
IV ival = SvIV(in);
out.type = jbvNumeric;
- out.val.numeric =
- DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum((int64) ival)));
+ out.val.numeric = int64_to_numeric(ival);
}
else if (SvNOK(in))
{
diff --git a/src/backend/utils/adt/cash.c b/src/backend/utils/adt/cash.c
index 6515fc8ec6..d093ce8038 100644
--- a/src/backend/utils/adt/cash.c
+++ b/src/backend/utils/adt/cash.c
@@ -1042,7 +1042,7 @@ cash_numeric(PG_FUNCTION_ARGS)
fpoint = 2;
/* convert the integral money value to numeric */
- result = DirectFunctionCall1(int8_numeric, Int64GetDatum(money));
+ result = NumericGetDatum(int64_to_numeric(money));
/* scale appropriately, if needed */
if (fpoint > 0)
@@ -1056,8 +1056,7 @@ cash_numeric(PG_FUNCTION_ARGS)
scale = 1;
for (i = 0; i < fpoint; i++)
scale *= 10;
- numeric_scale = DirectFunctionCall1(int8_numeric,
- Int64GetDatum(scale));
+ numeric_scale = NumericGetDatum(int64_to_numeric(scale));
/*
* Given integral inputs approaching INT64_MAX, select_div_scale()
@@ -1107,7 +1106,7 @@ numeric_cash(PG_FUNCTION_ARGS)
scale *= 10;
/* multiply the input amount by scale factor */
- numeric_scale = DirectFunctionCall1(int8_numeric, Int64GetDatum(scale));
+ numeric_scale = NumericGetDatum(int64_to_numeric(scale));
amount = DirectFunctionCall2(numeric_mul, amount, numeric_scale);
/* note that numeric_int8 will round to nearest integer for us */
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
index 2320c06a9b..7def7392b9 100644
--- a/src/backend/utils/adt/dbsize.c
+++ b/src/backend/utils/adt/dbsize.c
@@ -579,14 +579,6 @@ numeric_to_cstring(Numeric n)
return DatumGetCString(DirectFunctionCall1(numeric_out, d));
}
-static Numeric
-int64_to_numeric(int64 v)
-{
- Datum d = Int64GetDatum(v);
-
- return DatumGetNumeric(DirectFunctionCall1(int8_numeric, d));
-}
-
static bool
numeric_is_less(Numeric a, Numeric b)
{
@@ -615,9 +607,9 @@ numeric_half_rounded(Numeric n)
Datum two;
Datum result;
- zero = DirectFunctionCall1(int8_numeric, Int64GetDatum(0));
- one = DirectFunctionCall1(int8_numeric, Int64GetDatum(1));
- two = DirectFunctionCall1(int8_numeric, Int64GetDatum(2));
+ zero = NumericGetDatum(int64_to_numeric(0));
+ one = NumericGetDatum(int64_to_numeric(1));
+ two = NumericGetDatum(int64_to_numeric(2));
if (DatumGetBool(DirectFunctionCall2(numeric_ge, d, zero)))
d = DirectFunctionCall2(numeric_add, d, one);
@@ -632,12 +624,10 @@ static Numeric
numeric_shift_right(Numeric n, unsigned count)
{
Datum d = NumericGetDatum(n);
- Datum divisor_int64;
Datum divisor_numeric;
Datum result;
- divisor_int64 = Int64GetDatum((int64) (1 << count));
- divisor_numeric = DirectFunctionCall1(int8_numeric, divisor_int64);
+ divisor_numeric = NumericGetDatum(int64_to_numeric(1 << count));
result = DirectFunctionCall2(numeric_div_trunc, d, divisor_numeric);
return DatumGetNumeric(result);
}
@@ -832,8 +822,7 @@ pg_size_bytes(PG_FUNCTION_ARGS)
{
Numeric mul_num;
- mul_num = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum(multiplier)));
+ mul_num = int64_to_numeric(multiplier);
num = DatumGetNumeric(DirectFunctionCall2(numeric_mul,
NumericGetDatum(mul_num),
diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c
index 7d09537d82..f9aa968f09 100644
--- a/src/backend/utils/adt/formatting.c
+++ b/src/backend/utils/adt/formatting.c
@@ -6070,10 +6070,8 @@ numeric_to_number(PG_FUNCTION_ARGS)
if (IS_MULTI(&Num))
{
Numeric x;
- Numeric a = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(10)));
- Numeric b = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(-Num.multi)));
+ Numeric a = int64_to_numeric(10);
+ Numeric b = int64_to_numeric(-Num.multi);
x = DatumGetNumeric(DirectFunctionCall2(numeric_power,
NumericGetDatum(a),
@@ -6162,10 +6160,8 @@ numeric_to_char(PG_FUNCTION_ARGS)
if (IS_MULTI(&Num))
{
- Numeric a = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(10)));
- Numeric b = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(Num.multi)));
+ Numeric a = int64_to_numeric(10);
+ Numeric b = int64_to_numeric(Num.multi);
x = DatumGetNumeric(DirectFunctionCall2(numeric_power,
NumericGetDatum(a),
@@ -6339,11 +6335,8 @@ int8_to_char(PG_FUNCTION_ARGS)
else if (IS_EEEE(&Num))
{
/* to avoid loss of precision, must go via numeric not float8 */
- Numeric val;
-
- val = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum(value)));
- orgnum = numeric_out_sci(val, Num.post);
+ orgnum = numeric_out_sci(int64_to_numeric(value),
+ Num.post);
/*
* numeric_out_sci() does not emit a sign for positive numbers. We
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index f146767bfc..7403c760b4 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -842,9 +842,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,
lastjbv = hasNext ? &tmpjbv : palloc(sizeof(*lastjbv));
lastjbv->type = jbvNumeric;
- lastjbv->val.numeric =
- DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(last)));
+ lastjbv->val.numeric = int64_to_numeric(last);
res = executeNextItem(cxt, jsp, &elem,
lastjbv, found, hasNext);
@@ -1012,9 +1010,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,
jb = palloc(sizeof(*jb));
jb->type = jbvNumeric;
- jb->val.numeric =
- DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- Int32GetDatum(size)));
+ jb->val.numeric = int64_to_numeric(size);
res = executeNextItem(cxt, jsp, NULL, jb, found, false);
}
@@ -1979,8 +1975,7 @@ executeKeyValueMethod(JsonPathExecContext *cxt, JsonPathItem *jsp,
id += (int64) cxt->baseObject.id * INT64CONST(10000000000);
idval.type = jbvNumeric;
- idval.val.numeric = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- Int64GetDatum(id)));
+ idval.val.numeric = int64_to_numeric(id);
it = JsonbIteratorInit(jbc);
diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index ed825a1fdd..d2cc74b284 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -4073,23 +4073,29 @@ numeric_trim_scale(PG_FUNCTION_ARGS)
* ----------------------------------------------------------------------
*/
-
-Datum
-int4_numeric(PG_FUNCTION_ARGS)
+Numeric
+int64_to_numeric(int64 val)
{
- int32 val = PG_GETARG_INT32(0);
Numeric res;
NumericVar result;
init_var(&result);
- int64_to_numericvar((int64) val, &result);
+ int64_to_numericvar(val, &result);
res = make_result(&result);
free_var(&result);
- PG_RETURN_NUMERIC(res);
+ return res;
+}
+
+Datum
+int4_numeric(PG_FUNCTION_ARGS)
+{
+ int32 val = PG_GETARG_INT32(0);
+
+ PG_RETURN_NUMERIC(int64_to_numeric(val));
}
int32
@@ -4174,18 +4180,8 @@ Datum
int8_numeric(PG_FUNCTION_ARGS)
{
int64 val = PG_GETARG_INT64(0);
- Numeric res;
- NumericVar result;
- init_var(&result);
-
- int64_to_numericvar(val, &result);
-
- res = make_result(&result);
-
- free_var(&result);
-
- PG_RETURN_NUMERIC(res);
+ PG_RETURN_NUMERIC(int64_to_numeric(val));
}
@@ -4224,18 +4220,8 @@ Datum
int2_numeric(PG_FUNCTION_ARGS)
{
int16 val = PG_GETARG_INT16(0);
- Numeric res;
- NumericVar result;
-
- init_var(&result);
-
- int64_to_numericvar((int64) val, &result);
-
- res = make_result(&result);
- free_var(&result);
-
- PG_RETURN_NUMERIC(res);
+ PG_RETURN_NUMERIC(int64_to_numeric(val));
}
@@ -5290,11 +5276,7 @@ int2_accum(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_accum(state, (int128) PG_GETARG_INT16(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int2_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT16(1)));
#endif
}
@@ -5317,11 +5299,7 @@ int4_accum(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_accum(state, (int128) PG_GETARG_INT32(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT32(1)));
#endif
}
@@ -5340,13 +5318,7 @@ int8_accum(PG_FUNCTION_ARGS)
state = makeNumericAggState(fcinfo, true);
if (!PG_ARGISNULL(1))
- {
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
- }
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(1)));
PG_RETURN_POINTER(state);
}
@@ -5570,11 +5542,7 @@ int8_avg_accum(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_accum(state, (int128) PG_GETARG_INT64(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
- do_numeric_accum(state, newval);
+ do_numeric_accum(state, int64_to_numeric(PG_GETARG_INT64(1)));
#endif
}
@@ -5767,13 +5735,8 @@ int2_accum_inv(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_discard(state, (int128) PG_GETARG_INT16(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int2_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT16(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
#endif
}
@@ -5797,13 +5760,8 @@ int4_accum_inv(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_discard(state, (int128) PG_GETARG_INT32(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT32(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
#endif
}
@@ -5824,13 +5782,8 @@ int8_accum_inv(PG_FUNCTION_ARGS)
if (!PG_ARGISNULL(1))
{
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT64(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
}
@@ -5853,13 +5806,8 @@ int8_avg_accum_inv(PG_FUNCTION_ARGS)
#ifdef HAVE_INT128
do_int128_discard(state, (int128) PG_GETARG_INT64(1));
#else
- Numeric newval;
-
- newval = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
- PG_GETARG_DATUM(1)));
-
/* Should never fail, all inputs have dscale 0 */
- if (!do_numeric_discard(state, newval))
+ if (!do_numeric_discard(state, int64_to_numeric(PG_GETARG_INT64(1))))
elog(ERROR, "do_numeric_discard failed unexpectedly");
#endif
}
@@ -5914,8 +5862,7 @@ numeric_poly_avg(PG_FUNCTION_ARGS)
int128_to_numericvar(state->sumX, &result);
- countd = DirectFunctionCall1(int8_numeric,
- Int64GetDatumFast(state->N));
+ countd = NumericGetDatum(int64_to_numeric(state->N));
sumd = NumericGetDatum(make_result(&result));
free_var(&result);
@@ -5951,7 +5898,7 @@ numeric_avg(PG_FUNCTION_ARGS)
if (state->nInfcount > 0)
PG_RETURN_NUMERIC(make_result(&const_ninf));
- N_datum = DirectFunctionCall1(int8_numeric, Int64GetDatum(state->N));
+ N_datum = NumericGetDatum(int64_to_numeric(state->N));
init_var(&sumX_var);
accum_sum_final(&state->sumX, &sumX_var);
@@ -6411,7 +6358,6 @@ Datum
int8_sum(PG_FUNCTION_ARGS)
{
Numeric oldsum;
- Datum newval;
if (PG_ARGISNULL(0))
{
@@ -6419,8 +6365,7 @@ int8_sum(PG_FUNCTION_ARGS)
if (PG_ARGISNULL(1))
PG_RETURN_NULL(); /* still no non-null */
/* This is the first non-null input. */
- newval = DirectFunctionCall1(int8_numeric, PG_GETARG_DATUM(1));
- PG_RETURN_DATUM(newval);
+ PG_RETURN_NUMERIC(int64_to_numeric(PG_GETARG_INT64(1)));
}
/*
@@ -6436,10 +6381,9 @@ int8_sum(PG_FUNCTION_ARGS)
PG_RETURN_NUMERIC(oldsum);
/* OK to do the addition. */
- newval = DirectFunctionCall1(int8_numeric, PG_GETARG_DATUM(1));
-
PG_RETURN_DATUM(DirectFunctionCall2(numeric_add,
- NumericGetDatum(oldsum), newval));
+ NumericGetDatum(oldsum),
+ NumericGetDatum(int64_to_numeric(PG_GETARG_INT64(1)))));
}
@@ -6618,10 +6562,8 @@ int8_avg(PG_FUNCTION_ARGS)
if (transdata->count == 0)
PG_RETURN_NULL();
- countd = DirectFunctionCall1(int8_numeric,
- Int64GetDatumFast(transdata->count));
- sumd = DirectFunctionCall1(int8_numeric,
- Int64GetDatumFast(transdata->sum));
+ countd = NumericGetDatum(int64_to_numeric(transdata->count));
+ sumd = NumericGetDatum(int64_to_numeric(transdata->sum));
PG_RETURN_DATUM(DirectFunctionCall2(numeric_div, sumd, countd));
}
diff --git a/src/include/utils/numeric.h b/src/include/utils/numeric.h
index 0b7d4ba3c4..2a768b9a04 100644
--- a/src/include/utils/numeric.h
+++ b/src/include/utils/numeric.h
@@ -62,6 +62,8 @@ int32 numeric_maximum_size(int32 typmod);
extern char *numeric_out_sci(Numeric num, int scale);
extern char *numeric_normalize(Numeric num);
+extern Numeric int64_to_numeric(int64 val);
+
extern Numeric numeric_add_opt_error(Numeric num1, Numeric num2,
bool *have_error);
extern Numeric numeric_sub_opt_error(Numeric num1, Numeric num2,
po 7. 9. 2020 v 1:46 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I reviewed the 0002 patch, finding one bug (in int8_sum) and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
This patch is a clean win.
+1
I continue to think that we can't commit 0003 in this form, because
of the breakage that will ensure in stored views. As I said upthread,
we should leave the existing SQL-exposed functions alone, invent
new ones that return numeric, and alter the parser to translate
EXTRACT constructs to the new functions. This approach would also
provide an "out" for anyone who does complain about the performance
cost --- they can just continue to use the old functions.
+1
Regards
Pavel
regards, tom lane
On 2020-09-07 01:46, Tom Lane wrote:
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I reviewed the 0002 patch, finding one bug (in int8_sum)
Ouch, no test coverage. Should we perhaps remove this function, since
it's obsolete and unused?
and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
Yes, please go ahead with it.
I continue to think that we can't commit 0003 in this form, because
of the breakage that will ensue in stored views. As I said upthread,
we should leave the existing SQL-exposed functions alone, invent
new ones that return numeric, and alter the parser to translate
EXTRACT constructs to the new functions. This approach would also
provide an "out" for anyone who does complain about the performance
cost --- they can just continue to use the old functions.
Okay, I will continue looking into this.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
On 2020-09-07 01:46, Tom Lane wrote:
I reviewed the 0002 patch, finding one bug (in int8_sum)
Ouch, no test coverage. Should we perhaps remove this function, since
it's obsolete and unused?
I don't feel a need to.
and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
Yes, please go ahead with it.
It's your patch, I figured you'd want to commit it.
regards, tom lane
On 2020-09-09 15:38, Tom Lane wrote:
and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
Yes, please go ahead with it.
It's your patch, I figured you'd want to commit it.
ok done
On Wed, Sep 09, 2020 at 08:47:36PM +0200, Peter Eisentraut wrote:
ok done
As far as I can see, patches 0001 and 0002 have been already applied,
but not 0003. Could you send a rebase to allow the CF bot to run, at
least?
--
Michael
On Thu, Sep 17, 2020 at 02:53:27PM +0900, Michael Paquier wrote:
As far as I can see, patches 0001 and 0002 have been already applied,
but not 0003. Could you send a rebase to allow the CF bot to run, at
least?
This was two weeks ago. Looking at 0003, the thing is not really
complicated, but as this thread has stalled I have marked the entry as
RwF.
--
Michael
I wrote:
However: suppose that we continue to translate these things into FuncExpr
nodes, the same as always, but we add a new CoercionForm variant, say
COERCE_SQL_SYNTAX. 99% of the system ignores FuncExpr.funcformat,
and would continue to do so, but ruleutils.c would take it to mean
that (1) the call should be reverse-listed as some special SQL syntax
and (2) the funcid is one of a small set of built-in functions for
which ruleutils.c knows what to emit. (If it doesn't recognize the
funcid, it could either throw an error, or fall back to normal display
of the node.) For cases such as EXTRACT, this would also represent
a promise that specific arguments are Const nodes from which the
desired keyword can be extracted.
Attached is a draft patch that does this. I'm fairly pleased with it,
but there are some loose ends as described below. As the patch stands,
it reverse-lists all our special-format function call syntaxes
*except* EXTRACT. I left that out since I think we want to apply the
reverse-listing change when we add the numeric-output extraction
functions, as I said upthread.
The main thing that's incomplete here is that the switch on function
OID fails to cover some cases that ought to be covered, as a result
of limitations of Gen_fmgrtab.pl:
* Some C functions such as text_substr have multiple pg_proc entries,
and Gen_fmgrtab.pl chooses the wrong one for our purpose. We could
either invent new Gen_fmgrtab.pl behavior to allow having macros for
all the pg_proc entries, or we could add duplicate C functions so that
the pg_proc entries can point to different C symbols.
* Some of the functions we need to reference aren't C functions at
all, but SQL functions, for instance OID 1305 is defined as
select ($1, ($1 + $2)) overlaps ($3, ($3 + $4))
I think our best bet here is to replace these SQL definitions with
C equivalents, because really this implementation is pretty sucky.
Even if we manage to inline the SQL definition, that's expensive
to do; and evaluating some of the arguments twice is not nice either.
This is kind of an abuse of "CoercionForm", since that typedef name
implies that it only talks about how to handle cast cases, but
semantically it's always been a how-to-display-function-calls thing.
We could either hold our noses about that or rename the typedef.
I did nothing about that here, since it'd bloat the patch without
making anything but cosmetic changes. I'm tempted to propose though
that we rename "CoercionForm" to "DisplayForm" and rename its
COERCE_XXX values to DISPLAY_XXX, to make this less confusing.
Another bit of follow-up work we could contemplate is to get rid of
the SQLValueFunction node type, since there's nothing it does that
we couldn't do with regular FuncExpr nodes and COERCE_SQL_SYNTAX.
But that's just cleanup, and I don't think it would save a very
large amount of code.
Thoughts?
regards, tom lane
Attachments:
reverse-list-special-sql-syntaxes-1.patch (text/x-diff, us-ascii)
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 2b4d7654cc..f14236ad3a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2682,11 +2682,12 @@ _copyFuncCall(const FuncCall *from)
COPY_NODE_FIELD(args);
COPY_NODE_FIELD(agg_order);
COPY_NODE_FIELD(agg_filter);
+ COPY_NODE_FIELD(over);
COPY_SCALAR_FIELD(agg_within_group);
COPY_SCALAR_FIELD(agg_star);
COPY_SCALAR_FIELD(agg_distinct);
COPY_SCALAR_FIELD(func_variadic);
- COPY_NODE_FIELD(over);
+ COPY_SCALAR_FIELD(funcformat);
COPY_LOCATION_FIELD(location);
return newnode;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index e2d1b987bf..8985b11f8f 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2377,11 +2377,12 @@ _equalFuncCall(const FuncCall *a, const FuncCall *b)
COMPARE_NODE_FIELD(args);
COMPARE_NODE_FIELD(agg_order);
COMPARE_NODE_FIELD(agg_filter);
+ COMPARE_NODE_FIELD(over);
COMPARE_SCALAR_FIELD(agg_within_group);
COMPARE_SCALAR_FIELD(agg_star);
COMPARE_SCALAR_FIELD(agg_distinct);
COMPARE_SCALAR_FIELD(func_variadic);
- COMPARE_NODE_FIELD(over);
+ COMPARE_SCALAR_FIELD(funcformat);
COMPARE_LOCATION_FIELD(location);
return true;
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 49de285f01..ee033ae779 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -582,7 +582,7 @@ makeDefElemExtended(char *nameSpace, char *name, Node *arg,
* supply. Any non-default parameters have to be inserted by the caller.
*/
FuncCall *
-makeFuncCall(List *name, List *args, int location)
+makeFuncCall(List *name, List *args, CoercionForm funcformat, int location)
{
FuncCall *n = makeNode(FuncCall);
@@ -590,11 +590,12 @@ makeFuncCall(List *name, List *args, int location)
n->args = args;
n->agg_order = NIL;
n->agg_filter = NULL;
+ n->over = NULL;
n->agg_within_group = false;
n->agg_star = false;
n->agg_distinct = false;
n->func_variadic = false;
- n->over = NULL;
+ n->funcformat = funcformat;
n->location = location;
return n;
}
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 530328af43..cafe19b2f0 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2766,11 +2766,12 @@ _outFuncCall(StringInfo str, const FuncCall *node)
WRITE_NODE_FIELD(args);
WRITE_NODE_FIELD(agg_order);
WRITE_NODE_FIELD(agg_filter);
+ WRITE_NODE_FIELD(over);
WRITE_BOOL_FIELD(agg_within_group);
WRITE_BOOL_FIELD(agg_star);
WRITE_BOOL_FIELD(agg_distinct);
WRITE_BOOL_FIELD(func_variadic);
- WRITE_NODE_FIELD(over);
+ WRITE_ENUM_FIELD(funcformat, CoercionForm);
WRITE_LOCATION_FIELD(location);
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 480d168346..20e840ce20 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -12977,6 +12977,7 @@ a_expr: c_expr { $$ = $1; }
{
$$ = (Node *) makeFuncCall(SystemFuncName("timezone"),
list_make2($5, $1),
+ COERCE_SQL_SYNTAX,
@2);
}
/*
@@ -13040,6 +13041,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($3, $5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_LIKE, "~~",
$1, (Node *) n, @2);
@@ -13053,6 +13055,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_LIKE, "!~~",
$1, (Node *) n, @2);
@@ -13066,6 +13069,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($3, $5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_ILIKE, "~~*",
$1, (Node *) n, @2);
@@ -13079,6 +13083,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_ILIKE, "!~~*",
$1, (Node *) n, @2);
@@ -13088,6 +13093,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make1($4),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "~",
$1, (Node *) n, @2);
@@ -13096,6 +13102,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "~",
$1, (Node *) n, @2);
@@ -13104,6 +13111,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make1($5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "!~",
$1, (Node *) n, @2);
@@ -13112,6 +13120,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make2($5, $7),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "!~",
$1, (Node *) n, @2);
@@ -13172,6 +13181,7 @@ a_expr: c_expr { $$ = $1; }
parser_errposition(@3)));
$$ = (Node *) makeFuncCall(SystemFuncName("overlaps"),
list_concat($1, $3),
+ COERCE_SQL_SYNTAX,
@2);
}
| a_expr IS TRUE_P %prec IS
@@ -13359,19 +13369,33 @@ a_expr: c_expr { $$ = $1; }
}
| a_expr IS NORMALIZED %prec IS
{
- $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make1($1), @2);
+ $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make1($1),
+ COERCE_SQL_SYNTAX,
+ @2);
}
| a_expr IS unicode_normal_form NORMALIZED %prec IS
{
- $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make2($1, makeStringConst($3, @3)), @2);
+ $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make2($1, makeStringConst($3, @3)),
+ COERCE_SQL_SYNTAX,
+ @2);
}
| a_expr IS NOT NORMALIZED %prec IS
{
- $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make1($1), @2), @2);
+ $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make1($1),
+ COERCE_SQL_SYNTAX,
+ @2),
+ @2);
}
| a_expr IS NOT unicode_normal_form NORMALIZED %prec IS
{
- $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make2($1, makeStringConst($4, @4)), @2), @2);
+ $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make2($1, makeStringConst($4, @4)),
+ COERCE_SQL_SYNTAX,
+ @2),
+ @2);
}
| DEFAULT
{
@@ -13621,31 +13645,41 @@ c_expr: columnref { $$ = $1; }
func_application: func_name '(' ')'
{
- $$ = (Node *) makeFuncCall($1, NIL, @1);
+ $$ = (Node *) makeFuncCall($1, NIL,
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| func_name '(' func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $3, @1);
+ FuncCall *n = makeFuncCall($1, $3,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $4;
$$ = (Node *)n;
}
| func_name '(' VARIADIC func_arg_expr opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, list_make1($4), @1);
+ FuncCall *n = makeFuncCall($1, list_make1($4),
+ COERCE_EXPLICIT_CALL,
+ @1);
n->func_variadic = true;
n->agg_order = $5;
$$ = (Node *)n;
}
| func_name '(' func_arg_list ',' VARIADIC func_arg_expr opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, lappend($3, $6), @1);
+ FuncCall *n = makeFuncCall($1, lappend($3, $6),
+ COERCE_EXPLICIT_CALL,
+ @1);
n->func_variadic = true;
n->agg_order = $7;
$$ = (Node *)n;
}
| func_name '(' ALL func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $4, @1);
+ FuncCall *n = makeFuncCall($1, $4,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $5;
/* Ideally we'd mark the FuncCall node to indicate
* "must be an aggregate", but there's no provision
@@ -13655,7 +13689,9 @@ func_application: func_name '(' ')'
}
| func_name '(' DISTINCT func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $4, @1);
+ FuncCall *n = makeFuncCall($1, $4,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $5;
n->agg_distinct = true;
$$ = (Node *)n;
@@ -13672,7 +13708,9 @@ func_application: func_name '(' ')'
* so that later processing can detect what the argument
* really was.
*/
- FuncCall *n = makeFuncCall($1, NIL, @1);
+ FuncCall *n = makeFuncCall($1, NIL,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_star = true;
$$ = (Node *)n;
}
@@ -13746,6 +13784,7 @@ func_expr_common_subexpr:
{
$$ = (Node *) makeFuncCall(SystemFuncName("pg_collation_for"),
list_make1($4),
+ COERCE_SQL_SYNTAX,
@1);
}
| CURRENT_DATE
@@ -13812,31 +13851,49 @@ func_expr_common_subexpr:
{ $$ = makeTypeCast($3, $5, @1); }
| EXTRACT '(' extract_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("date_part"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("date_part"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NORMALIZE '(' a_expr ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("normalize"), list_make1($3), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("normalize"),
+ list_make1($3),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NORMALIZE '(' a_expr ',' unicode_normal_form ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("normalize"), list_make2($3, makeStringConst($5, @5)), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("normalize"),
+ list_make2($3, makeStringConst($5, @5)),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| OVERLAY '(' overlay_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("overlay"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("overlay"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| POSITION '(' position_list ')'
{
/* position(A in B) is converted to position(B, A) */
- $$ = (Node *) makeFuncCall(SystemFuncName("position"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("position"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| SUBSTRING '(' substr_list ')'
{
/* substring(A from B for C) is converted to
* substring(A, B, C) - thomas 2000-11-28
*/
- $$ = (Node *) makeFuncCall(SystemFuncName("substring"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("substring"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TREAT '(' a_expr AS Typename ')'
{
@@ -13849,28 +13906,41 @@ func_expr_common_subexpr:
* Convert SystemTypeName() to SystemFuncName() even though
* at the moment they result in the same thing.
*/
- $$ = (Node *) makeFuncCall(SystemFuncName(((Value *)llast($5->names))->val.str),
- list_make1($3),
- @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName(((Value *) llast($5->names))->val.str),
+ list_make1($3),
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| TRIM '(' BOTH trim_list ')'
{
/* various trim expressions are defined in SQL
* - thomas 1997-07-19
*/
- $$ = (Node *) makeFuncCall(SystemFuncName("btrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("btrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' LEADING trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("ltrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("ltrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' TRAILING trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("rtrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("rtrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("btrim"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("btrim"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NULLIF '(' a_expr ',' a_expr ')'
{
@@ -13923,7 +13993,10 @@ func_expr_common_subexpr:
{
/* xmlexists(A PASSING [BY REF] B [BY REF]) is
* converted to xmlexists(A, B)*/
- $$ = (Node *) makeFuncCall(SystemFuncName("xmlexists"), list_make2($3, $4), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("xmlexists"),
+ list_make2($3, $4),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| XMLFOREST '(' xml_attribute_list ')'
{
@@ -14453,10 +14526,10 @@ extract_arg:
;
unicode_normal_form:
- NFC { $$ = "nfc"; }
- | NFD { $$ = "nfd"; }
- | NFKC { $$ = "nfkc"; }
- | NFKD { $$ = "nfkd"; }
+ NFC { $$ = "NFC"; }
+ | NFD { $$ = "NFD"; }
+ | NFKC { $$ = "NFKC"; }
+ | NFKD { $$ = "NFKD"; }
;
/* OVERLAY() arguments */
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 7460e61160..ea4a1f5aeb 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -541,10 +541,11 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
list_length(fc->args) > 1 &&
fc->agg_order == NIL &&
fc->agg_filter == NULL &&
+ fc->over == NULL &&
!fc->agg_star &&
!fc->agg_distinct &&
!fc->func_variadic &&
- fc->over == NULL &&
+ fc->funcformat == COERCE_EXPLICIT_CALL &&
coldeflist == NIL)
{
ListCell *lc;
@@ -558,6 +559,7 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
newfc = makeFuncCall(SystemFuncName("unnest"),
list_make1(arg),
+ COERCE_EXPLICIT_CALL,
fc->location);
newfexpr = transformExpr(pstate, (Node *) newfc,
diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c
index a7a31704fb..8b4e3ca5e1 100644
--- a/src/backend/parser/parse_func.c
+++ b/src/backend/parser/parse_func.c
@@ -91,11 +91,12 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
bool is_column = (fn == NULL);
List *agg_order = (fn ? fn->agg_order : NIL);
Expr *agg_filter = NULL;
+ WindowDef *over = (fn ? fn->over : NULL);
bool agg_within_group = (fn ? fn->agg_within_group : false);
bool agg_star = (fn ? fn->agg_star : false);
bool agg_distinct = (fn ? fn->agg_distinct : false);
bool func_variadic = (fn ? fn->func_variadic : false);
- WindowDef *over = (fn ? fn->over : NULL);
+ CoercionForm funcformat = (fn ? fn->funcformat : COERCE_EXPLICIT_CALL);
bool could_be_projection;
Oid rettype;
Oid funcid;
@@ -221,6 +222,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
agg_order == NIL && agg_filter == NULL &&
!agg_star && !agg_distinct && over == NULL &&
!func_variadic && argnames == NIL &&
+ funcformat == COERCE_EXPLICIT_CALL &&
list_length(funcname) == 1 &&
(actual_arg_types[0] == RECORDOID ||
ISCOMPLEX(actual_arg_types[0])));
@@ -742,7 +744,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
funcexpr->funcresulttype = rettype;
funcexpr->funcretset = retset;
funcexpr->funcvariadic = func_variadic;
- funcexpr->funcformat = COERCE_EXPLICIT_CALL;
+ funcexpr->funcformat = funcformat;
/* funccollid and inputcollid will be set by parse_collate.c */
funcexpr->args = fargs;
funcexpr->location = location;
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index 015b0538e3..254c0f65c2 100644
--- a/src/backend/parser/parse_utilcmd.c
+++ b/src/backend/parser/parse_utilcmd.c
@@ -604,6 +604,7 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column)
castnode->location = -1;
funccallnode = makeFuncCall(SystemFuncName("nextval"),
list_make1(castnode),
+ COERCE_EXPLICIT_CALL,
-1);
constraint = makeNode(Constraint);
constraint->contype = CONSTR_DEFAULT;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 6c656586e8..407719e494 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -443,6 +443,7 @@ static void get_agg_expr(Aggref *aggref, deparse_context *context,
static void get_agg_combine_expr(Node *node, deparse_context *context,
void *callback_arg);
static void get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context);
+static bool get_func_sql_syntax(FuncExpr *expr, deparse_context *context);
static void get_coercion_expr(Node *arg, deparse_context *context,
Oid resulttype, int32 resulttypmod,
Node *parentNode);
@@ -9159,7 +9160,8 @@ looks_like_function(Node *node)
{
case T_FuncExpr:
/* OK, unless it's going to deparse as a cast */
- return (((FuncExpr *) node)->funcformat == COERCE_EXPLICIT_CALL);
+ return (((FuncExpr *) node)->funcformat == COERCE_EXPLICIT_CALL ||
+ ((FuncExpr *) node)->funcformat == COERCE_SQL_SYNTAX);
case T_NullIfExpr:
case T_CoalesceExpr:
case T_MinMaxExpr:
@@ -9261,6 +9263,17 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
return;
}
+ /*
+ * If the function was called using one of the SQL spec's random special
+ * syntaxes, try to reproduce that. If we don't recognize the function,
+ * fall through.
+ */
+ if (expr->funcformat == COERCE_SQL_SYNTAX)
+ {
+ if (get_func_sql_syntax(expr, context))
+ return;
+ }
+
/*
* Normal function: display as proname(args). First we need to extract
* the argument datatypes.
@@ -9496,6 +9509,204 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
}
}
+/*
+ * get_func_sql_syntax - Parse back a SQL-syntax function call
+ *
+ * Returns true if we successfully deparsed, false if we did not
+ * recognize the function.
+ */
+static bool
+get_func_sql_syntax(FuncExpr *expr, deparse_context *context)
+{
+ StringInfo buf = context->buf;
+ Oid funcoid = expr->funcid;
+
+ switch (funcoid)
+ {
+ case F_TIMESTAMP_IZONE:
+ case F_TIMESTAMP_ZONE:
+ case F_TIMESTAMPTZ_IZONE:
+ case F_TIMESTAMPTZ_ZONE:
+ case F_TIMETZ_IZONE:
+ case F_TIMETZ_ZONE:
+ /* AT TIME ZONE ... note reversed argument order */
+ appendStringInfoChar(buf, '(');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " AT TIME ZONE ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ /* XXX this fails to cover all built-in "overlaps" functions */
+ case F_OVERLAPS_TIME:
+ case F_OVERLAPS_TIMESTAMP:
+ case F_OVERLAPS_TIMETZ:
+ /* (x1, x2) OVERLAPS (y1, y2) */
+ appendStringInfoString(buf, "((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ", ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, ") OVERLAPS (");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ appendStringInfoString(buf, ", ");
+ get_rule_expr((Node *) lfourth(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+
+ case F_UNICODE_IS_NORMALIZED:
+ /* IS xxx NORMALIZED */
+ appendStringInfoString(buf, "((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ") IS");
+ if (list_length(expr->args) == 2)
+ {
+ Const *con = (Const *) lsecond(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfo(buf, " %s",
+ TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoString(buf, " NORMALIZED)");
+ return true;
+
+ case F_PG_COLLATION_FOR:
+ /* COLLATION FOR */
+ appendStringInfoString(buf, "COLLATION FOR (");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ /*
+ * XXX EXTRACT, a/k/a date_part(), is intentionally not covered
+ * yet. Add it after we change the return type to numeric.
+ */
+
+ case F_UNICODE_NORMALIZE_FUNC:
+ /* NORMALIZE() */
+ appendStringInfoString(buf, "NORMALIZE(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ if (list_length(expr->args) == 2)
+ {
+ Const *con = (Const *) lsecond(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfo(buf, ", %s",
+ TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_BITOVERLAY:
+ case F_BITOVERLAY_NO_LEN:
+ case F_BYTEAOVERLAY:
+ case F_BYTEAOVERLAY_NO_LEN:
+ case F_TEXTOVERLAY:
+ case F_TEXTOVERLAY_NO_LEN:
+ /* OVERLAY() */
+ appendStringInfoString(buf, "OVERLAY(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " PLACING ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ if (list_length(expr->args) == 4)
+ {
+ appendStringInfoString(buf, " FOR ");
+ get_rule_expr((Node *) lfourth(expr->args), context, false);
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_BITPOSITION:
+ case F_BYTEAPOS:
+ case F_TEXTPOS:
+ /* POSITION() ... extra parens since args are b_expr not a_expr */
+ appendStringInfoString(buf, "POSITION((");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, ") IN (");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+
+ case F_BITSUBSTR:
+ case F_BITSUBSTR_NO_LEN:
+ case F_BYTEA_SUBSTR:
+ case F_BYTEA_SUBSTR_NO_LEN:
+ case F_TEXT_SUBSTR: /* XXX fails, because this is the wrong OID */
+ case F_TEXT_SUBSTR_NO_LEN: /* XXX fails likewise */
+ case F_TEXTREGEXSUBSTR:
+ /* SUBSTRING FROM/FOR */
+ appendStringInfoString(buf, "SUBSTRING(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ if (list_length(expr->args) == 3)
+ {
+ appendStringInfoString(buf, " FOR ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_BTRIM:
+ case F_BTRIM1:
+ case F_BYTEATRIM:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(BOTH");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_LTRIM:
+ case F_LTRIM1:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(LEADING");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_RTRIM:
+ case F_RTRIM1:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(TRAILING");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_XMLEXISTS:
+ /* XMLEXISTS ... extra parens because args are c_expr */
+ appendStringInfoString(buf, "XMLEXISTS((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ") PASSING (");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+ }
+ return false;
+}
+
/* ----------
* get_coercion_expr
*
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 31d9aedeeb..7ebd794713 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -79,7 +79,8 @@ extern ColumnDef *makeColumnDef(const char *colname,
extern FuncExpr *makeFuncExpr(Oid funcid, Oid rettype, List *args,
Oid funccollid, Oid inputcollid, CoercionForm fformat);
-extern FuncCall *makeFuncCall(List *name, List *args, int location);
+extern FuncCall *makeFuncCall(List *name, List *args,
+ CoercionForm funcformat, int location);
extern Expr *make_opclause(Oid opno, Oid opresulttype, bool opretset,
Expr *leftop, Expr *rightop,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ff584f2955..34090eddf9 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -353,11 +353,12 @@ typedef struct FuncCall
List *args; /* the arguments (list of exprs) */
List *agg_order; /* ORDER BY (list of SortBy) */
Node *agg_filter; /* FILTER clause, if any */
+ struct WindowDef *over; /* OVER clause, if any */
bool agg_within_group; /* ORDER BY appeared in WITHIN GROUP */
bool agg_star; /* argument was really '*' */
bool agg_distinct; /* arguments were labeled DISTINCT */
bool func_variadic; /* last argument was labeled VARIADIC */
- struct WindowDef *over; /* OVER clause, if any */
+ CoercionForm funcformat; /* how to display this node */
int location; /* token location, or -1 if unknown */
} FuncCall;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index fd65ee8f9c..5b190bb99b 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -445,7 +445,10 @@ typedef enum CoercionContext
} CoercionContext;
/*
- * CoercionForm - how to display a node that could have come from a cast
+ * CoercionForm - how to display a FuncExpr or related node
+ *
+ * "Coercion" is a bit of a misnomer, since this value records other
+ * special syntaxes besides casts, but for now we'll keep this naming.
*
* NB: equal() ignores CoercionForm fields, therefore this *must* not carry
* any semantically significant information. We need that behavior so that
@@ -457,7 +460,8 @@ typedef enum CoercionForm
{
COERCE_EXPLICIT_CALL, /* display as a function call */
COERCE_EXPLICIT_CAST, /* display as an explicit cast */
- COERCE_IMPLICIT_CAST /* implicit cast, so hide it */
+ COERCE_IMPLICIT_CAST, /* implicit cast, so hide it */
+ COERCE_SQL_SYNTAX /* display with SQL-mandated special syntax */
} CoercionForm;
/*
diff --git a/src/test/modules/test_rls_hooks/test_rls_hooks.c b/src/test/modules/test_rls_hooks/test_rls_hooks.c
index 0bfa878a25..c0aaabdcdb 100644
--- a/src/test/modules/test_rls_hooks/test_rls_hooks.c
+++ b/src/test/modules/test_rls_hooks/test_rls_hooks.c
@@ -95,7 +95,10 @@ test_rls_hooks_permissive(CmdType cmdtype, Relation relation)
*/
n = makeFuncCall(list_make2(makeString("pg_catalog"),
- makeString("current_user")), NIL, 0);
+ makeString("current_user")),
+ NIL,
+ COERCE_EXPLICIT_CALL,
+ -1);
c = makeNode(ColumnRef);
c->fields = list_make1(makeString("username"));
@@ -155,7 +158,10 @@ test_rls_hooks_restrictive(CmdType cmdtype, Relation relation)
policy->roles = construct_array(&role, 1, OIDOID, sizeof(Oid), true, TYPALIGN_INT);
n = makeFuncCall(list_make2(makeString("pg_catalog"),
- makeString("current_user")), NIL, 0);
+ makeString("current_user")),
+ NIL,
+ COERCE_EXPLICIT_CALL,
+ -1);
c = makeNode(ColumnRef);
c->fields = list_make1(makeString("supervisor"));
diff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out
index f10a3a7a12..a3a3b12941 100644
--- a/src/test/regress/expected/create_view.out
+++ b/src/test/regress/expected/create_view.out
@@ -1710,7 +1710,7 @@ select pg_get_viewdef('tt20v', true);
i4.i4, +
i8.i8 +
FROM COALESCE(1, 2) c(c), +
- pg_collation_for('x'::text) col(col), +
+ COLLATION FOR ('x'::text) col(col), +
CURRENT_DATE d(d), +
LOCALTIMESTAMP(3) t(t), +
CAST(1 + 2 AS integer) i4(i4), +
diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out
index 639b50308e..c300965554 100644
--- a/src/test/regress/expected/timestamptz.out
+++ b/src/test/regress/expected/timestamptz.out
@@ -2726,10 +2726,10 @@ create temp table tmptz (f1 timestamptz primary key);
insert into tmptz values ('2017-01-18 00:00+00');
explain (costs off)
select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
- QUERY PLAN
--------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------------------------------------
Seq Scan on tmptz
- Filter: (timezone('utc'::text, f1) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
+ Filter: ((f1 AT TIME ZONE 'utc'::text) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
(2 rows)
select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
I wrote:
Attached is a draft patch that does this. I'm fairly pleased with it,
but there are some loose ends as described below. As the patch stands,
it reverse-lists all our special-format function call syntaxes
*except* EXTRACT. I left that out since I think we want to apply the
reverse-listing change when we add the numeric-output extraction
functions, as I said upthread.
The main thing that's incomplete here is that the switch on function
OID fails to cover some cases that ought to be covered, as a result
of limitations of Gen_fmgrtab.pl:
Now that 8e1f37c07 fixed that, here's a complete version, with better
test coverage. (I still think we might want to rewrite those SQL
functions as C, but that can be an independent project now.)
Remaining open issues:
* I notice that this will sometimes transform non-SQL-spec syntax
into SQL-spec, for example
# explain verbose select substring(now()::text, 'foo');
QUERY PLAN
-----------------------------------------------------
Result (cost=0.00..0.02 rows=1 width=32)
Output: SUBSTRING((now())::text FROM 'foo'::text)
(2 rows)
I'm not sure that that satisfies the POLA. This particular case is
especially not great, because this is really textregexsubstr() which
is *not* SQL compatible, so the display is more than a bit misleading.
The reason this happens is that we've included expr_list as a variant of
substr_list, so that the func_expr_common_subexpr production has no idea
whether the argument list was really special syntax or not. What I'm
inclined to do, but have not done yet, is to split that apart into
separate variants so that when the SQL-spec decoration is not used we
just generate a perfectly vanilla FuncCall. In fact, I'd sort of argue
that we should not force the function to be sought in pg_catalog in such
a case either. The comments in substr_list claim that we're trying to
allow extension functions named substring(), but using SystemFuncName is
100% hostile to that.
* Still waiting for comments on whether to rename CoercionForm.
regards, tom lane
Attachment: reverse-list-special-sql-syntaxes-2.patch (text/x-diff, charset=us-ascii)
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 530aac68a7..3031c52991 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2682,11 +2682,12 @@ _copyFuncCall(const FuncCall *from)
COPY_NODE_FIELD(args);
COPY_NODE_FIELD(agg_order);
COPY_NODE_FIELD(agg_filter);
+ COPY_NODE_FIELD(over);
COPY_SCALAR_FIELD(agg_within_group);
COPY_SCALAR_FIELD(agg_star);
COPY_SCALAR_FIELD(agg_distinct);
COPY_SCALAR_FIELD(func_variadic);
- COPY_NODE_FIELD(over);
+ COPY_SCALAR_FIELD(funcformat);
COPY_LOCATION_FIELD(location);
return newnode;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 0cf90ef33c..9aa853748d 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2369,11 +2369,12 @@ _equalFuncCall(const FuncCall *a, const FuncCall *b)
COMPARE_NODE_FIELD(args);
COMPARE_NODE_FIELD(agg_order);
COMPARE_NODE_FIELD(agg_filter);
+ COMPARE_NODE_FIELD(over);
COMPARE_SCALAR_FIELD(agg_within_group);
COMPARE_SCALAR_FIELD(agg_star);
COMPARE_SCALAR_FIELD(agg_distinct);
COMPARE_SCALAR_FIELD(func_variadic);
- COMPARE_NODE_FIELD(over);
+ COMPARE_SCALAR_FIELD(funcformat);
COMPARE_LOCATION_FIELD(location);
return true;
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 49de285f01..ee033ae779 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -582,7 +582,7 @@ makeDefElemExtended(char *nameSpace, char *name, Node *arg,
* supply. Any non-default parameters have to be inserted by the caller.
*/
FuncCall *
-makeFuncCall(List *name, List *args, int location)
+makeFuncCall(List *name, List *args, CoercionForm funcformat, int location)
{
FuncCall *n = makeNode(FuncCall);
@@ -590,11 +590,12 @@ makeFuncCall(List *name, List *args, int location)
n->args = args;
n->agg_order = NIL;
n->agg_filter = NULL;
+ n->over = NULL;
n->agg_within_group = false;
n->agg_star = false;
n->agg_distinct = false;
n->func_variadic = false;
- n->over = NULL;
+ n->funcformat = funcformat;
n->location = location;
return n;
}
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 7e324c12e2..4504b1503b 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2765,11 +2765,12 @@ _outFuncCall(StringInfo str, const FuncCall *node)
WRITE_NODE_FIELD(args);
WRITE_NODE_FIELD(agg_order);
WRITE_NODE_FIELD(agg_filter);
+ WRITE_NODE_FIELD(over);
WRITE_BOOL_FIELD(agg_within_group);
WRITE_BOOL_FIELD(agg_star);
WRITE_BOOL_FIELD(agg_distinct);
WRITE_BOOL_FIELD(func_variadic);
- WRITE_NODE_FIELD(over);
+ WRITE_ENUM_FIELD(funcformat, CoercionForm);
WRITE_LOCATION_FIELD(location);
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 357ab93fb6..4641050dab 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -12969,6 +12969,7 @@ a_expr: c_expr { $$ = $1; }
{
$$ = (Node *) makeFuncCall(SystemFuncName("timezone"),
list_make2($5, $1),
+ COERCE_SQL_SYNTAX,
@2);
}
/*
@@ -13032,6 +13033,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($3, $5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_LIKE, "~~",
$1, (Node *) n, @2);
@@ -13045,6 +13047,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_LIKE, "!~~",
$1, (Node *) n, @2);
@@ -13058,6 +13061,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($3, $5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_ILIKE, "~~*",
$1, (Node *) n, @2);
@@ -13071,6 +13075,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_ILIKE, "!~~*",
$1, (Node *) n, @2);
@@ -13080,6 +13085,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make1($4),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "~",
$1, (Node *) n, @2);
@@ -13088,6 +13094,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "~",
$1, (Node *) n, @2);
@@ -13096,6 +13103,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make1($5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "!~",
$1, (Node *) n, @2);
@@ -13104,6 +13112,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make2($5, $7),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "!~",
$1, (Node *) n, @2);
@@ -13164,6 +13173,7 @@ a_expr: c_expr { $$ = $1; }
parser_errposition(@3)));
$$ = (Node *) makeFuncCall(SystemFuncName("overlaps"),
list_concat($1, $3),
+ COERCE_SQL_SYNTAX,
@2);
}
| a_expr IS TRUE_P %prec IS
@@ -13351,19 +13361,33 @@ a_expr: c_expr { $$ = $1; }
}
| a_expr IS NORMALIZED %prec IS
{
- $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make1($1), @2);
+ $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make1($1),
+ COERCE_SQL_SYNTAX,
+ @2);
}
| a_expr IS unicode_normal_form NORMALIZED %prec IS
{
- $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make2($1, makeStringConst($3, @3)), @2);
+ $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make2($1, makeStringConst($3, @3)),
+ COERCE_SQL_SYNTAX,
+ @2);
}
| a_expr IS NOT NORMALIZED %prec IS
{
- $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make1($1), @2), @2);
+ $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make1($1),
+ COERCE_SQL_SYNTAX,
+ @2),
+ @2);
}
| a_expr IS NOT unicode_normal_form NORMALIZED %prec IS
{
- $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make2($1, makeStringConst($4, @4)), @2), @2);
+ $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make2($1, makeStringConst($4, @4)),
+ COERCE_SQL_SYNTAX,
+ @2),
+ @2);
}
| DEFAULT
{
@@ -13613,31 +13637,41 @@ c_expr: columnref { $$ = $1; }
func_application: func_name '(' ')'
{
- $$ = (Node *) makeFuncCall($1, NIL, @1);
+ $$ = (Node *) makeFuncCall($1, NIL,
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| func_name '(' func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $3, @1);
+ FuncCall *n = makeFuncCall($1, $3,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $4;
$$ = (Node *)n;
}
| func_name '(' VARIADIC func_arg_expr opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, list_make1($4), @1);
+ FuncCall *n = makeFuncCall($1, list_make1($4),
+ COERCE_EXPLICIT_CALL,
+ @1);
n->func_variadic = true;
n->agg_order = $5;
$$ = (Node *)n;
}
| func_name '(' func_arg_list ',' VARIADIC func_arg_expr opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, lappend($3, $6), @1);
+ FuncCall *n = makeFuncCall($1, lappend($3, $6),
+ COERCE_EXPLICIT_CALL,
+ @1);
n->func_variadic = true;
n->agg_order = $7;
$$ = (Node *)n;
}
| func_name '(' ALL func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $4, @1);
+ FuncCall *n = makeFuncCall($1, $4,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $5;
/* Ideally we'd mark the FuncCall node to indicate
* "must be an aggregate", but there's no provision
@@ -13647,7 +13681,9 @@ func_application: func_name '(' ')'
}
| func_name '(' DISTINCT func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $4, @1);
+ FuncCall *n = makeFuncCall($1, $4,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $5;
n->agg_distinct = true;
$$ = (Node *)n;
@@ -13664,7 +13700,9 @@ func_application: func_name '(' ')'
* so that later processing can detect what the argument
* really was.
*/
- FuncCall *n = makeFuncCall($1, NIL, @1);
+ FuncCall *n = makeFuncCall($1, NIL,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_star = true;
$$ = (Node *)n;
}
@@ -13738,6 +13776,7 @@ func_expr_common_subexpr:
{
$$ = (Node *) makeFuncCall(SystemFuncName("pg_collation_for"),
list_make1($4),
+ COERCE_SQL_SYNTAX,
@1);
}
| CURRENT_DATE
@@ -13804,31 +13843,49 @@ func_expr_common_subexpr:
{ $$ = makeTypeCast($3, $5, @1); }
| EXTRACT '(' extract_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("date_part"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("date_part"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NORMALIZE '(' a_expr ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("normalize"), list_make1($3), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("normalize"),
+ list_make1($3),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NORMALIZE '(' a_expr ',' unicode_normal_form ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("normalize"), list_make2($3, makeStringConst($5, @5)), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("normalize"),
+ list_make2($3, makeStringConst($5, @5)),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| OVERLAY '(' overlay_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("overlay"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("overlay"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| POSITION '(' position_list ')'
{
/* position(A in B) is converted to position(B, A) */
- $$ = (Node *) makeFuncCall(SystemFuncName("position"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("position"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| SUBSTRING '(' substr_list ')'
{
/* substring(A from B for C) is converted to
* substring(A, B, C) - thomas 2000-11-28
*/
- $$ = (Node *) makeFuncCall(SystemFuncName("substring"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("substring"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TREAT '(' a_expr AS Typename ')'
{
@@ -13841,28 +13898,41 @@ func_expr_common_subexpr:
* Convert SystemTypeName() to SystemFuncName() even though
* at the moment they result in the same thing.
*/
- $$ = (Node *) makeFuncCall(SystemFuncName(((Value *)llast($5->names))->val.str),
- list_make1($3),
- @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName(((Value *) llast($5->names))->val.str),
+ list_make1($3),
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| TRIM '(' BOTH trim_list ')'
{
/* various trim expressions are defined in SQL
* - thomas 1997-07-19
*/
- $$ = (Node *) makeFuncCall(SystemFuncName("btrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("btrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' LEADING trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("ltrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("ltrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' TRAILING trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("rtrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("rtrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("btrim"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("btrim"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NULLIF '(' a_expr ',' a_expr ')'
{
@@ -13915,7 +13985,10 @@ func_expr_common_subexpr:
{
/* xmlexists(A PASSING [BY REF] B [BY REF]) is
* converted to xmlexists(A, B)*/
- $$ = (Node *) makeFuncCall(SystemFuncName("xmlexists"), list_make2($3, $4), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("xmlexists"),
+ list_make2($3, $4),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| XMLFOREST '(' xml_attribute_list ')'
{
@@ -14445,10 +14518,10 @@ extract_arg:
;
unicode_normal_form:
- NFC { $$ = "nfc"; }
- | NFD { $$ = "nfd"; }
- | NFKC { $$ = "nfkc"; }
- | NFKD { $$ = "nfkd"; }
+ NFC { $$ = "NFC"; }
+ | NFD { $$ = "NFD"; }
+ | NFKC { $$ = "NFKC"; }
+ | NFKD { $$ = "NFKD"; }
;
/* OVERLAY() arguments */
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 7460e61160..ea4a1f5aeb 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -541,10 +541,11 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
list_length(fc->args) > 1 &&
fc->agg_order == NIL &&
fc->agg_filter == NULL &&
+ fc->over == NULL &&
!fc->agg_star &&
!fc->agg_distinct &&
!fc->func_variadic &&
- fc->over == NULL &&
+ fc->funcformat == COERCE_EXPLICIT_CALL &&
coldeflist == NIL)
{
ListCell *lc;
@@ -558,6 +559,7 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
newfc = makeFuncCall(SystemFuncName("unnest"),
list_make1(arg),
+ COERCE_EXPLICIT_CALL,
fc->location);
newfexpr = transformExpr(pstate, (Node *) newfc,
diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c
index a7a31704fb..8b4e3ca5e1 100644
--- a/src/backend/parser/parse_func.c
+++ b/src/backend/parser/parse_func.c
@@ -91,11 +91,12 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
bool is_column = (fn == NULL);
List *agg_order = (fn ? fn->agg_order : NIL);
Expr *agg_filter = NULL;
+ WindowDef *over = (fn ? fn->over : NULL);
bool agg_within_group = (fn ? fn->agg_within_group : false);
bool agg_star = (fn ? fn->agg_star : false);
bool agg_distinct = (fn ? fn->agg_distinct : false);
bool func_variadic = (fn ? fn->func_variadic : false);
- WindowDef *over = (fn ? fn->over : NULL);
+ CoercionForm funcformat = (fn ? fn->funcformat : COERCE_EXPLICIT_CALL);
bool could_be_projection;
Oid rettype;
Oid funcid;
@@ -221,6 +222,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
agg_order == NIL && agg_filter == NULL &&
!agg_star && !agg_distinct && over == NULL &&
!func_variadic && argnames == NIL &&
+ funcformat == COERCE_EXPLICIT_CALL &&
list_length(funcname) == 1 &&
(actual_arg_types[0] == RECORDOID ||
ISCOMPLEX(actual_arg_types[0])));
@@ -742,7 +744,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
funcexpr->funcresulttype = rettype;
funcexpr->funcretset = retset;
funcexpr->funcvariadic = func_variadic;
- funcexpr->funcformat = COERCE_EXPLICIT_CALL;
+ funcexpr->funcformat = funcformat;
/* funccollid and inputcollid will be set by parse_collate.c */
funcexpr->args = fargs;
funcexpr->location = location;
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index 015b0538e3..254c0f65c2 100644
--- a/src/backend/parser/parse_utilcmd.c
+++ b/src/backend/parser/parse_utilcmd.c
@@ -604,6 +604,7 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column)
castnode->location = -1;
funccallnode = makeFuncCall(SystemFuncName("nextval"),
list_make1(castnode),
+ COERCE_EXPLICIT_CALL,
-1);
constraint = makeNode(Constraint);
constraint->contype = CONSTR_DEFAULT;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 28f56074c0..1b2a2d59f0 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -443,6 +443,7 @@ static void get_agg_expr(Aggref *aggref, deparse_context *context,
static void get_agg_combine_expr(Node *node, deparse_context *context,
void *callback_arg);
static void get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context);
+static bool get_func_sql_syntax(FuncExpr *expr, deparse_context *context);
static void get_coercion_expr(Node *arg, deparse_context *context,
Oid resulttype, int32 resulttypmod,
Node *parentNode);
@@ -9155,7 +9156,8 @@ looks_like_function(Node *node)
{
case T_FuncExpr:
/* OK, unless it's going to deparse as a cast */
- return (((FuncExpr *) node)->funcformat == COERCE_EXPLICIT_CALL);
+ return (((FuncExpr *) node)->funcformat == COERCE_EXPLICIT_CALL ||
+ ((FuncExpr *) node)->funcformat == COERCE_SQL_SYNTAX);
case T_NullIfExpr:
case T_CoalesceExpr:
case T_MinMaxExpr:
@@ -9257,6 +9259,17 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
return;
}
+ /*
+ * If the function was called using one of the SQL spec's random special
+ * syntaxes, try to reproduce that. If we don't recognize the function,
+ * fall through.
+ */
+ if (expr->funcformat == COERCE_SQL_SYNTAX)
+ {
+ if (get_func_sql_syntax(expr, context))
+ return;
+ }
+
/*
* Normal function: display as proname(args). First we need to extract
* the argument datatypes.
@@ -9492,6 +9505,224 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
}
}
+/*
+ * get_func_sql_syntax - Parse back a SQL-syntax function call
+ *
+ * Returns true if we successfully deparsed, false if we did not
+ * recognize the function.
+ */
+static bool
+get_func_sql_syntax(FuncExpr *expr, deparse_context *context)
+{
+ StringInfo buf = context->buf;
+ Oid funcoid = expr->funcid;
+
+ switch (funcoid)
+ {
+ case F_TIMEZONE_INTERVAL_TIMESTAMP:
+ case F_TIMEZONE_INTERVAL_TIMESTAMPTZ:
+ case F_TIMEZONE_INTERVAL_TIMETZ:
+ case F_TIMEZONE_TEXT_TIMESTAMP:
+ case F_TIMEZONE_TEXT_TIMESTAMPTZ:
+ case F_TIMEZONE_TEXT_TIMETZ:
+ /* AT TIME ZONE ... note reversed argument order */
+ appendStringInfoChar(buf, '(');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " AT TIME ZONE ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_OVERLAPS_TIMESTAMPTZ_INTERVAL_TIMESTAMPTZ_INTERVAL:
+ case F_OVERLAPS_TIMESTAMPTZ_INTERVAL_TIMESTAMPTZ_TIMESTAMPTZ:
+ case F_OVERLAPS_TIMESTAMPTZ_TIMESTAMPTZ_TIMESTAMPTZ_INTERVAL:
+ case F_OVERLAPS_TIMESTAMPTZ_TIMESTAMPTZ_TIMESTAMPTZ_TIMESTAMPTZ:
+ case F_OVERLAPS_TIMESTAMP_INTERVAL_TIMESTAMP_INTERVAL:
+ case F_OVERLAPS_TIMESTAMP_INTERVAL_TIMESTAMP_TIMESTAMP:
+ case F_OVERLAPS_TIMESTAMP_TIMESTAMP_TIMESTAMP_INTERVAL:
+ case F_OVERLAPS_TIMESTAMP_TIMESTAMP_TIMESTAMP_TIMESTAMP:
+ case F_OVERLAPS_TIMETZ_TIMETZ_TIMETZ_TIMETZ:
+ case F_OVERLAPS_TIME_INTERVAL_TIME_INTERVAL:
+ case F_OVERLAPS_TIME_INTERVAL_TIME_TIME:
+ case F_OVERLAPS_TIME_TIME_TIME_INTERVAL:
+ case F_OVERLAPS_TIME_TIME_TIME_TIME:
+ /* (x1, x2) OVERLAPS (y1, y2) */
+ appendStringInfoString(buf, "((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ", ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, ") OVERLAPS (");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ appendStringInfoString(buf, ", ");
+ get_rule_expr((Node *) lfourth(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+
+ case F_IS_NORMALIZED:
+ /* IS xxx NORMALIZED */
+ appendStringInfoString(buf, "((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ") IS");
+ if (list_length(expr->args) == 2)
+ {
+ Const *con = (Const *) lsecond(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfo(buf, " %s",
+ TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoString(buf, " NORMALIZED)");
+ return true;
+
+ case F_PG_COLLATION_FOR:
+ /* COLLATION FOR */
+ appendStringInfoString(buf, "COLLATION FOR (");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ /*
+ * XXX EXTRACT, a/k/a date_part(), is intentionally not covered
+ * yet. Add it after we change the return type to numeric.
+ */
+
+ case F_NORMALIZE:
+ /* NORMALIZE() */
+ appendStringInfoString(buf, "NORMALIZE(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ if (list_length(expr->args) == 2)
+ {
+ Const *con = (Const *) lsecond(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfo(buf, ", %s",
+ TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_OVERLAY_BIT_BIT_INT4:
+ case F_OVERLAY_BIT_BIT_INT4_INT4:
+ case F_OVERLAY_BYTEA_BYTEA_INT4:
+ case F_OVERLAY_BYTEA_BYTEA_INT4_INT4:
+ case F_OVERLAY_TEXT_TEXT_INT4:
+ case F_OVERLAY_TEXT_TEXT_INT4_INT4:
+ /* OVERLAY() */
+ appendStringInfoString(buf, "OVERLAY(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " PLACING ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ if (list_length(expr->args) == 4)
+ {
+ appendStringInfoString(buf, " FOR ");
+ get_rule_expr((Node *) lfourth(expr->args), context, false);
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_POSITION_BIT_BIT:
+ case F_POSITION_BYTEA_BYTEA:
+ case F_POSITION_TEXT_TEXT:
+ /* POSITION() ... extra parens since args are b_expr not a_expr */
+ appendStringInfoString(buf, "POSITION((");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, ") IN (");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+
+ case F_SUBSTRING_BIT_INT4:
+ case F_SUBSTRING_BIT_INT4_INT4:
+ case F_SUBSTRING_BYTEA_INT4:
+ case F_SUBSTRING_BYTEA_INT4_INT4:
+ case F_SUBSTRING_TEXT_INT4:
+ case F_SUBSTRING_TEXT_INT4_INT4:
+ case F_SUBSTRING_TEXT_TEXT:
+ /* SUBSTRING FROM/FOR */
+ appendStringInfoString(buf, "SUBSTRING(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ if (list_length(expr->args) == 3)
+ {
+ appendStringInfoString(buf, " FOR ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_SUBSTRING_TEXT_TEXT_TEXT:
+ /* SUBSTRING SIMILAR/ESCAPE */
+ appendStringInfoString(buf, "SUBSTRING(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " SIMILAR ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " ESCAPE ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_BTRIM_BYTEA_BYTEA:
+ case F_BTRIM_TEXT:
+ case F_BTRIM_TEXT_TEXT:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(BOTH");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_LTRIM_TEXT:
+ case F_LTRIM_TEXT_TEXT:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(LEADING");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_RTRIM_TEXT:
+ case F_RTRIM_TEXT_TEXT:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(TRAILING");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_XMLEXISTS:
+ /* XMLEXISTS ... extra parens because args are c_expr */
+ appendStringInfoString(buf, "XMLEXISTS((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ") PASSING (");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+ }
+ return false;
+}
+
/* ----------
* get_coercion_expr
*
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 31d9aedeeb..7ebd794713 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -79,7 +79,8 @@ extern ColumnDef *makeColumnDef(const char *colname,
extern FuncExpr *makeFuncExpr(Oid funcid, Oid rettype, List *args,
Oid funccollid, Oid inputcollid, CoercionForm fformat);
-extern FuncCall *makeFuncCall(List *name, List *args, int location);
+extern FuncCall *makeFuncCall(List *name, List *args,
+ CoercionForm funcformat, int location);
extern Expr *make_opclause(Oid opno, Oid opresulttype, bool opretset,
Expr *leftop, Expr *rightop,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e1aeea2560..80e2aba369 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -353,11 +353,12 @@ typedef struct FuncCall
List *args; /* the arguments (list of exprs) */
List *agg_order; /* ORDER BY (list of SortBy) */
Node *agg_filter; /* FILTER clause, if any */
+ struct WindowDef *over; /* OVER clause, if any */
bool agg_within_group; /* ORDER BY appeared in WITHIN GROUP */
bool agg_star; /* argument was really '*' */
bool agg_distinct; /* arguments were labeled DISTINCT */
bool func_variadic; /* last argument was labeled VARIADIC */
- struct WindowDef *over; /* OVER clause, if any */
+ CoercionForm funcformat; /* how to display this node */
int location; /* token location, or -1 if unknown */
} FuncCall;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index fd65ee8f9c..5b190bb99b 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -445,7 +445,10 @@ typedef enum CoercionContext
} CoercionContext;
/*
- * CoercionForm - how to display a node that could have come from a cast
+ * CoercionForm - how to display a FuncExpr or related node
+ *
+ * "Coercion" is a bit of a misnomer, since this value records other
+ * special syntaxes besides casts, but for now we'll keep this naming.
*
* NB: equal() ignores CoercionForm fields, therefore this *must* not carry
* any semantically significant information. We need that behavior so that
@@ -457,7 +460,8 @@ typedef enum CoercionForm
{
COERCE_EXPLICIT_CALL, /* display as a function call */
COERCE_EXPLICIT_CAST, /* display as an explicit cast */
- COERCE_IMPLICIT_CAST /* implicit cast, so hide it */
+ COERCE_IMPLICIT_CAST, /* implicit cast, so hide it */
+ COERCE_SQL_SYNTAX /* display with SQL-mandated special syntax */
} CoercionForm;
/*
diff --git a/src/test/modules/test_rls_hooks/test_rls_hooks.c b/src/test/modules/test_rls_hooks/test_rls_hooks.c
index 0bfa878a25..c0aaabdcdb 100644
--- a/src/test/modules/test_rls_hooks/test_rls_hooks.c
+++ b/src/test/modules/test_rls_hooks/test_rls_hooks.c
@@ -95,7 +95,10 @@ test_rls_hooks_permissive(CmdType cmdtype, Relation relation)
*/
n = makeFuncCall(list_make2(makeString("pg_catalog"),
- makeString("current_user")), NIL, 0);
+ makeString("current_user")),
+ NIL,
+ COERCE_EXPLICIT_CALL,
+ -1);
c = makeNode(ColumnRef);
c->fields = list_make1(makeString("username"));
@@ -155,7 +158,10 @@ test_rls_hooks_restrictive(CmdType cmdtype, Relation relation)
policy->roles = construct_array(&role, 1, OIDOID, sizeof(Oid), true, TYPALIGN_INT);
n = makeFuncCall(list_make2(makeString("pg_catalog"),
- makeString("current_user")), NIL, 0);
+ makeString("current_user")),
+ NIL,
+ COERCE_EXPLICIT_CALL,
+ -1);
c = makeNode(ColumnRef);
c->fields = list_make1(makeString("supervisor"));
diff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out
index f10a3a7a12..465a0a4da1 100644
--- a/src/test/regress/expected/create_view.out
+++ b/src/test/regress/expected/create_view.out
@@ -1710,13 +1710,50 @@ select pg_get_viewdef('tt20v', true);
i4.i4, +
i8.i8 +
FROM COALESCE(1, 2) c(c), +
- pg_collation_for('x'::text) col(col), +
+ COLLATION FOR ('x'::text) col(col), +
CURRENT_DATE d(d), +
LOCALTIMESTAMP(3) t(t), +
CAST(1 + 2 AS integer) i4(i4), +
CAST((1 + 2)::bigint AS bigint) i8(i8);
(1 row)
+-- reverse-listing of various special function syntaxes required by SQL
+create view tt201v as
+select
+ extract(day from now()) as extr,
+ (now(), '1 day'::interval) overlaps
+ (current_timestamp(2), '1 day'::interval) as o,
+ 'foo' is normalized isn,
+ 'foo' is nfkc normalized isnn,
+ normalize('foo') as n,
+ normalize('foo', nfkd) as nfkd,
+ overlay('foo' placing 'bar' from 2) as ovl,
+ overlay('foo' placing 'bar' from 2 for 3) as ovl2,
+ position('foo' in 'foobar') as p,
+ substring('foo' from 2 for 3) as s,
+ substring('foo' similar 'f' escape '#') as ss,
+ trim(' ' from ' foo ') as bt,
+ trim(leading ' ' from ' foo ') as lt,
+ trim(trailing ' foo ') as rt;
+select pg_get_viewdef('tt201v', true);
+ pg_get_viewdef
+-----------------------------------------------------------------------------------------------
+ SELECT date_part('day'::text, now()) AS extr, +
+ ((now(), '@ 1 day'::interval) OVERLAPS (CURRENT_TIMESTAMP(2), '@ 1 day'::interval)) AS o,+
+ (('foo'::text) IS NORMALIZED) AS isn, +
+ (('foo'::text) IS NFKC NORMALIZED) AS isnn, +
+ NORMALIZE('foo'::text) AS n, +
+ NORMALIZE('foo'::text, NFKD) AS nfkd, +
+ OVERLAY('foo'::text PLACING 'bar'::text FROM 2) AS ovl, +
+ OVERLAY('foo'::text PLACING 'bar'::text FROM 2 FOR 3) AS ovl2, +
+ POSITION(('foo'::text) IN ('foobar'::text)) AS p, +
+ SUBSTRING('foo'::text FROM 2 FOR 3) AS s, +
+ SUBSTRING('foo'::text SIMILAR 'f'::text ESCAPE '#'::text) AS ss, +
+ TRIM(BOTH ' '::text FROM ' foo '::text) AS bt, +
+ TRIM(LEADING ' '::text FROM ' foo '::text) AS lt, +
+ TRIM(TRAILING FROM ' foo '::text) AS rt;
+(1 row)
+
-- corner cases with empty join conditions
create view tt21v as
select * from tt5 natural inner join tt6;
@@ -1904,7 +1941,7 @@ drop cascades to view aliased_view_2
drop cascades to view aliased_view_3
drop cascades to view aliased_view_4
DROP SCHEMA testviewschm2 CASCADE;
-NOTICE: drop cascades to 67 other objects
+NOTICE: drop cascades to 68 other objects
DETAIL: drop cascades to table t1
drop cascades to view temporal1
drop cascades to view temporal2
@@ -1966,6 +2003,7 @@ drop cascades to view tt17v
drop cascades to view tt18v
drop cascades to view tt19v
drop cascades to view tt20v
+drop cascades to view tt201v
drop cascades to view tt21v
drop cascades to view tt22v
drop cascades to view tt23v
diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out
index 639b50308e..c300965554 100644
--- a/src/test/regress/expected/timestamptz.out
+++ b/src/test/regress/expected/timestamptz.out
@@ -2726,10 +2726,10 @@ create temp table tmptz (f1 timestamptz primary key);
insert into tmptz values ('2017-01-18 00:00+00');
explain (costs off)
select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
- QUERY PLAN
--------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------------------------------------
Seq Scan on tmptz
- Filter: (timezone('utc'::text, f1) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
+ Filter: ((f1 AT TIME ZONE 'utc'::text) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
(2 rows)
select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
diff --git a/src/test/regress/sql/create_view.sql b/src/test/regress/sql/create_view.sql
index e7af0bf2fa..62d1586ea0 100644
--- a/src/test/regress/sql/create_view.sql
+++ b/src/test/regress/sql/create_view.sql
@@ -586,6 +586,27 @@ select * from
cast(1+2 as int8) as i8;
select pg_get_viewdef('tt20v', true);
+-- reverse-listing of various special function syntaxes required by SQL
+
+create view tt201v as
+select
+ extract(day from now()) as extr,
+ (now(), '1 day'::interval) overlaps
+ (current_timestamp(2), '1 day'::interval) as o,
+ 'foo' is normalized isn,
+ 'foo' is nfkc normalized isnn,
+ normalize('foo') as n,
+ normalize('foo', nfkd) as nfkd,
+ overlay('foo' placing 'bar' from 2) as ovl,
+ overlay('foo' placing 'bar' from 2 for 3) as ovl2,
+ position('foo' in 'foobar') as p,
+ substring('foo' from 2 for 3) as s,
+ substring('foo' similar 'f' escape '#') as ss,
+ trim(' ' from ' foo ') as bt,
+ trim(leading ' ' from ' foo ') as lt,
+ trim(trailing ' foo ') as rt;
+select pg_get_viewdef('tt201v', true);
+
-- corner cases with empty join conditions
create view tt21v as
I wrote:
* I notice that this will sometimes transform non-SQL-spec syntax
into SQL-spec, for example ...
I'm not sure that that satisfies the POLA. This particular case is
especially not great, because this is really textregexsubstr() which
is *not* SQL compatible, so the display is more than a bit misleading.
Actually, the problem there is that I made ruleutils.c willing to
reverse-list textregexsubstr() in SQL syntax, which it really shouldn't
since there is no such function per SQL. So deleting that "case" value
is enough to fix most of the problem. Still:
... In fact, I'd sort of argue
that we should not force the function to be sought in pg_catalog in such
a case either. The comments in substr_list claim that we're trying to
allow extension functions named substring(), but using SystemFuncName is
100% hostile to that.
... this seems like a reasonable argument. However, in the attached
I only did that for SUBSTRING and OVERLAY. I had thought of doing
it for POSITION and TRIM, but both of those are weird enough that
allowing a "normal function call" seems error-prone. For example,
the fact that TRIM(expr_list) works out as a call to btrim() is a mess,
but I don't think we can change it. (But of course you can still call a
user-defined trim() function if you double-quote the function name.)
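To make that asymmetry concrete, here is a sketch in psql (the jsonb-typed functions are invented purely for illustration): with this patch, SUBSTRING and OVERLAY fall back to ordinary search-path function lookup when the SQL-spec keywords aren't used, while TRIM keeps its special grammar.

```sql
-- Hypothetical extension functions, for illustration only:
CREATE FUNCTION "substring"(jsonb, int) RETURNS jsonb
    LANGUAGE sql AS 'SELECT $1';
CREATE FUNCTION "trim"(jsonb) RETURNS jsonb
    LANGUAGE sql AS 'SELECT $1';

-- With the new func_arg_list_opt production, this plain-looking call
-- is resolved via the search_path and can find the user function:
SELECT substring('{"a":1}'::jsonb, 1);

-- TRIM still only accepts trim_list syntax, so a user-defined trim()
-- remains reachable only by double-quoting the name:
SELECT "trim"('{"a":1}'::jsonb);
```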
I did get rid of the empty variant for position_list, which AFAICS
has no value except adding confusion: there are no zero-argument
functions named "position" in pg_catalog.
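For reference, the argument reversal that makes a "plain syntax" option risky for POSITION: position(A in B) is parsed with the arguments swapped, so a normal-function-call spelling would silently mean the opposite order. A sketch:

```sql
SELECT position('oo' in 'foobar');
-- parsed as pg_catalog."position"('foobar', 'oo'), i.e. the haystack
-- comes first internally; a plain call position('oo', 'foobar') would
-- therefore search the wrong string
```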
I feel like this is committable at this point --- any objections?
regards, tom lane
Attachments:
reverse-list-special-sql-syntaxes-3.patch (text/x-diff)
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 530aac68a7..3031c52991 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2682,11 +2682,12 @@ _copyFuncCall(const FuncCall *from)
COPY_NODE_FIELD(args);
COPY_NODE_FIELD(agg_order);
COPY_NODE_FIELD(agg_filter);
+ COPY_NODE_FIELD(over);
COPY_SCALAR_FIELD(agg_within_group);
COPY_SCALAR_FIELD(agg_star);
COPY_SCALAR_FIELD(agg_distinct);
COPY_SCALAR_FIELD(func_variadic);
- COPY_NODE_FIELD(over);
+ COPY_SCALAR_FIELD(funcformat);
COPY_LOCATION_FIELD(location);
return newnode;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 0cf90ef33c..9aa853748d 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2369,11 +2369,12 @@ _equalFuncCall(const FuncCall *a, const FuncCall *b)
COMPARE_NODE_FIELD(args);
COMPARE_NODE_FIELD(agg_order);
COMPARE_NODE_FIELD(agg_filter);
+ COMPARE_NODE_FIELD(over);
COMPARE_SCALAR_FIELD(agg_within_group);
COMPARE_SCALAR_FIELD(agg_star);
COMPARE_SCALAR_FIELD(agg_distinct);
COMPARE_SCALAR_FIELD(func_variadic);
- COMPARE_NODE_FIELD(over);
+ COMPARE_SCALAR_FIELD(funcformat);
COMPARE_LOCATION_FIELD(location);
return true;
diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c
index 49de285f01..ee033ae779 100644
--- a/src/backend/nodes/makefuncs.c
+++ b/src/backend/nodes/makefuncs.c
@@ -582,7 +582,7 @@ makeDefElemExtended(char *nameSpace, char *name, Node *arg,
* supply. Any non-default parameters have to be inserted by the caller.
*/
FuncCall *
-makeFuncCall(List *name, List *args, int location)
+makeFuncCall(List *name, List *args, CoercionForm funcformat, int location)
{
FuncCall *n = makeNode(FuncCall);
@@ -590,11 +590,12 @@ makeFuncCall(List *name, List *args, int location)
n->args = args;
n->agg_order = NIL;
n->agg_filter = NULL;
+ n->over = NULL;
n->agg_within_group = false;
n->agg_star = false;
n->agg_distinct = false;
n->func_variadic = false;
- n->over = NULL;
+ n->funcformat = funcformat;
n->location = location;
return n;
}
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 7e324c12e2..4504b1503b 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2765,11 +2765,12 @@ _outFuncCall(StringInfo str, const FuncCall *node)
WRITE_NODE_FIELD(args);
WRITE_NODE_FIELD(agg_order);
WRITE_NODE_FIELD(agg_filter);
+ WRITE_NODE_FIELD(over);
WRITE_BOOL_FIELD(agg_within_group);
WRITE_BOOL_FIELD(agg_star);
WRITE_BOOL_FIELD(agg_distinct);
WRITE_BOOL_FIELD(func_variadic);
- WRITE_NODE_FIELD(over);
+ WRITE_ENUM_FIELD(funcformat, CoercionForm);
WRITE_LOCATION_FIELD(location);
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 357ab93fb6..df263e4fdd 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -490,7 +490,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <list> rowsfrom_item rowsfrom_list opt_col_def_list
%type <boolean> opt_ordinality
%type <list> ExclusionConstraintList ExclusionConstraintElem
-%type <list> func_arg_list
+%type <list> func_arg_list func_arg_list_opt
%type <node> func_arg_expr
%type <list> row explicit_row implicit_row type_list array_expr_list
%type <node> case_expr case_arg when_clause case_default
@@ -12969,6 +12969,7 @@ a_expr: c_expr { $$ = $1; }
{
$$ = (Node *) makeFuncCall(SystemFuncName("timezone"),
list_make2($5, $1),
+ COERCE_SQL_SYNTAX,
@2);
}
/*
@@ -13032,6 +13033,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($3, $5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_LIKE, "~~",
$1, (Node *) n, @2);
@@ -13045,6 +13047,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_LIKE, "!~~",
$1, (Node *) n, @2);
@@ -13058,6 +13061,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($3, $5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_ILIKE, "~~*",
$1, (Node *) n, @2);
@@ -13071,6 +13075,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("like_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_ILIKE, "!~~*",
$1, (Node *) n, @2);
@@ -13080,6 +13085,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make1($4),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "~",
$1, (Node *) n, @2);
@@ -13088,6 +13094,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make2($4, $6),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "~",
$1, (Node *) n, @2);
@@ -13096,6 +13103,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make1($5),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "!~",
$1, (Node *) n, @2);
@@ -13104,6 +13112,7 @@ a_expr: c_expr { $$ = $1; }
{
FuncCall *n = makeFuncCall(SystemFuncName("similar_to_escape"),
list_make2($5, $7),
+ COERCE_EXPLICIT_CALL,
@2);
$$ = (Node *) makeSimpleA_Expr(AEXPR_SIMILAR, "!~",
$1, (Node *) n, @2);
@@ -13164,6 +13173,7 @@ a_expr: c_expr { $$ = $1; }
parser_errposition(@3)));
$$ = (Node *) makeFuncCall(SystemFuncName("overlaps"),
list_concat($1, $3),
+ COERCE_SQL_SYNTAX,
@2);
}
| a_expr IS TRUE_P %prec IS
@@ -13351,19 +13361,33 @@ a_expr: c_expr { $$ = $1; }
}
| a_expr IS NORMALIZED %prec IS
{
- $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make1($1), @2);
+ $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make1($1),
+ COERCE_SQL_SYNTAX,
+ @2);
}
| a_expr IS unicode_normal_form NORMALIZED %prec IS
{
- $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make2($1, makeStringConst($3, @3)), @2);
+ $$ = (Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make2($1, makeStringConst($3, @3)),
+ COERCE_SQL_SYNTAX,
+ @2);
}
| a_expr IS NOT NORMALIZED %prec IS
{
- $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make1($1), @2), @2);
+ $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make1($1),
+ COERCE_SQL_SYNTAX,
+ @2),
+ @2);
}
| a_expr IS NOT unicode_normal_form NORMALIZED %prec IS
{
- $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"), list_make2($1, makeStringConst($4, @4)), @2), @2);
+ $$ = makeNotExpr((Node *) makeFuncCall(SystemFuncName("is_normalized"),
+ list_make2($1, makeStringConst($4, @4)),
+ COERCE_SQL_SYNTAX,
+ @2),
+ @2);
}
| DEFAULT
{
@@ -13613,31 +13637,41 @@ c_expr: columnref { $$ = $1; }
func_application: func_name '(' ')'
{
- $$ = (Node *) makeFuncCall($1, NIL, @1);
+ $$ = (Node *) makeFuncCall($1, NIL,
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| func_name '(' func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $3, @1);
+ FuncCall *n = makeFuncCall($1, $3,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $4;
$$ = (Node *)n;
}
| func_name '(' VARIADIC func_arg_expr opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, list_make1($4), @1);
+ FuncCall *n = makeFuncCall($1, list_make1($4),
+ COERCE_EXPLICIT_CALL,
+ @1);
n->func_variadic = true;
n->agg_order = $5;
$$ = (Node *)n;
}
| func_name '(' func_arg_list ',' VARIADIC func_arg_expr opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, lappend($3, $6), @1);
+ FuncCall *n = makeFuncCall($1, lappend($3, $6),
+ COERCE_EXPLICIT_CALL,
+ @1);
n->func_variadic = true;
n->agg_order = $7;
$$ = (Node *)n;
}
| func_name '(' ALL func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $4, @1);
+ FuncCall *n = makeFuncCall($1, $4,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $5;
/* Ideally we'd mark the FuncCall node to indicate
* "must be an aggregate", but there's no provision
@@ -13647,7 +13681,9 @@ func_application: func_name '(' ')'
}
| func_name '(' DISTINCT func_arg_list opt_sort_clause ')'
{
- FuncCall *n = makeFuncCall($1, $4, @1);
+ FuncCall *n = makeFuncCall($1, $4,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_order = $5;
n->agg_distinct = true;
$$ = (Node *)n;
@@ -13664,7 +13700,9 @@ func_application: func_name '(' ')'
* so that later processing can detect what the argument
* really was.
*/
- FuncCall *n = makeFuncCall($1, NIL, @1);
+ FuncCall *n = makeFuncCall($1, NIL,
+ COERCE_EXPLICIT_CALL,
+ @1);
n->agg_star = true;
$$ = (Node *)n;
}
@@ -13738,6 +13776,7 @@ func_expr_common_subexpr:
{
$$ = (Node *) makeFuncCall(SystemFuncName("pg_collation_for"),
list_make1($4),
+ COERCE_SQL_SYNTAX,
@1);
}
| CURRENT_DATE
@@ -13804,31 +13843,77 @@ func_expr_common_subexpr:
{ $$ = makeTypeCast($3, $5, @1); }
| EXTRACT '(' extract_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("date_part"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("date_part"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NORMALIZE '(' a_expr ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("normalize"), list_make1($3), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("normalize"),
+ list_make1($3),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NORMALIZE '(' a_expr ',' unicode_normal_form ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("normalize"), list_make2($3, makeStringConst($5, @5)), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("normalize"),
+ list_make2($3, makeStringConst($5, @5)),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| OVERLAY '(' overlay_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("overlay"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("overlay"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
+ }
+ | OVERLAY '(' func_arg_list_opt ')'
+ {
+ /*
+ * allow functions named overlay() to be called without
+ * special syntax
+ */
+ $$ = (Node *) makeFuncCall(list_make1(makeString("overlay")),
+ $3,
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| POSITION '(' position_list ')'
{
- /* position(A in B) is converted to position(B, A) */
- $$ = (Node *) makeFuncCall(SystemFuncName("position"), $3, @1);
+ /*
+ * position(A in B) is converted to position(B, A)
+ *
+ * We deliberately don't offer a "plain syntax" option
+ * for position(), because the reversal of the arguments
+ * creates too much risk of confusion.
+ */
+ $$ = (Node *) makeFuncCall(SystemFuncName("position"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| SUBSTRING '(' substr_list ')'
{
/* substring(A from B for C) is converted to
* substring(A, B, C) - thomas 2000-11-28
*/
- $$ = (Node *) makeFuncCall(SystemFuncName("substring"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("substring"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
+ }
+ | SUBSTRING '(' func_arg_list_opt ')'
+ {
+ /*
+ * allow functions named substring() to be called without
+ * special syntax
+ */
+ $$ = (Node *) makeFuncCall(list_make1(makeString("substring")),
+ $3,
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| TREAT '(' a_expr AS Typename ')'
{
@@ -13841,28 +13926,41 @@ func_expr_common_subexpr:
* Convert SystemTypeName() to SystemFuncName() even though
* at the moment they result in the same thing.
*/
- $$ = (Node *) makeFuncCall(SystemFuncName(((Value *)llast($5->names))->val.str),
- list_make1($3),
- @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName(((Value *) llast($5->names))->val.str),
+ list_make1($3),
+ COERCE_EXPLICIT_CALL,
+ @1);
}
| TRIM '(' BOTH trim_list ')'
{
/* various trim expressions are defined in SQL
* - thomas 1997-07-19
*/
- $$ = (Node *) makeFuncCall(SystemFuncName("btrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("btrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' LEADING trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("ltrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("ltrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' TRAILING trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("rtrim"), $4, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("rtrim"),
+ $4,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| TRIM '(' trim_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("btrim"), $3, @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("btrim"),
+ $3,
+ COERCE_SQL_SYNTAX,
+ @1);
}
| NULLIF '(' a_expr ',' a_expr ')'
{
@@ -13915,7 +14013,10 @@ func_expr_common_subexpr:
{
/* xmlexists(A PASSING [BY REF] B [BY REF]) is
* converted to xmlexists(A, B)*/
- $$ = (Node *) makeFuncCall(SystemFuncName("xmlexists"), list_make2($3, $4), @1);
+ $$ = (Node *) makeFuncCall(SystemFuncName("xmlexists"),
+ list_make2($3, $4),
+ COERCE_SQL_SYNTAX,
+ @1);
}
| XMLFOREST '(' xml_attribute_list ')'
{
@@ -14399,6 +14500,10 @@ func_arg_expr: a_expr
}
;
+func_arg_list_opt: func_arg_list { $$ = $1; }
+ | /*EMPTY*/ { $$ = NIL; }
+ ;
+
type_list: Typename { $$ = list_make1($1); }
| type_list ',' Typename { $$ = lappend($1, $3); }
;
@@ -14445,10 +14550,10 @@ extract_arg:
;
unicode_normal_form:
- NFC { $$ = "nfc"; }
- | NFD { $$ = "nfd"; }
- | NFKC { $$ = "nfkc"; }
- | NFKD { $$ = "nfkd"; }
+ NFC { $$ = "NFC"; }
+ | NFD { $$ = "NFD"; }
+ | NFKC { $$ = "NFKC"; }
+ | NFKD { $$ = "NFKD"; }
;
/* OVERLAY() arguments */
@@ -14468,29 +14573,24 @@ overlay_list:
/* position_list uses b_expr not a_expr to avoid conflict with general IN */
position_list:
b_expr IN_P b_expr { $$ = list_make2($3, $1); }
- | /*EMPTY*/ { $$ = NIL; }
;
/*
* SUBSTRING() arguments
*
* Note that SQL:1999 has both
- *
* text FROM int FOR int
- *
* and
- *
* text FROM pattern FOR escape
*
* In the parser we map them both to a call to the substring() function and
* rely on type resolution to pick the right one.
*
* In SQL:2003, the second variant was changed to
- *
* text SIMILAR pattern ESCAPE escape
- *
* We could in theory map that to a different function internally, but
- * since we still support the SQL:1999 version, we don't.
+ * since we still support the SQL:1999 version, we don't. However,
+ * ruleutils.c will reverse-list the call in the newer style.
*/
substr_list:
a_expr FROM a_expr FOR a_expr
@@ -14504,6 +14604,13 @@ substr_list:
}
| a_expr FROM a_expr
{
+ /*
+ * Because we aren't restricting data types here, this
+ * syntax can end up resolving to textregexsubstr().
+ * We've historically allowed that to happen, so continue
+ * to accept it. However, ruleutils.c will reverse-list
+ * such a call in regular function call syntax.
+ */
$$ = list_make2($1, $3);
}
| a_expr FOR a_expr
@@ -14527,16 +14634,6 @@ substr_list:
{
$$ = list_make3($1, $3, $5);
}
- /*
- * We also want to support generic substring functions that
- * accept the usual generic list of arguments.
- */
- | expr_list
- {
- $$ = $1;
- }
- | /*EMPTY*/
- { $$ = NIL; }
;
trim_list: a_expr FROM expr_list { $$ = lappend($3, $1); }
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 7460e61160..ea4a1f5aeb 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -541,10 +541,11 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
list_length(fc->args) > 1 &&
fc->agg_order == NIL &&
fc->agg_filter == NULL &&
+ fc->over == NULL &&
!fc->agg_star &&
!fc->agg_distinct &&
!fc->func_variadic &&
- fc->over == NULL &&
+ fc->funcformat == COERCE_EXPLICIT_CALL &&
coldeflist == NIL)
{
ListCell *lc;
@@ -558,6 +559,7 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
newfc = makeFuncCall(SystemFuncName("unnest"),
list_make1(arg),
+ COERCE_EXPLICIT_CALL,
fc->location);
newfexpr = transformExpr(pstate, (Node *) newfc,
diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c
index a7a31704fb..8b4e3ca5e1 100644
--- a/src/backend/parser/parse_func.c
+++ b/src/backend/parser/parse_func.c
@@ -91,11 +91,12 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
bool is_column = (fn == NULL);
List *agg_order = (fn ? fn->agg_order : NIL);
Expr *agg_filter = NULL;
+ WindowDef *over = (fn ? fn->over : NULL);
bool agg_within_group = (fn ? fn->agg_within_group : false);
bool agg_star = (fn ? fn->agg_star : false);
bool agg_distinct = (fn ? fn->agg_distinct : false);
bool func_variadic = (fn ? fn->func_variadic : false);
- WindowDef *over = (fn ? fn->over : NULL);
+ CoercionForm funcformat = (fn ? fn->funcformat : COERCE_EXPLICIT_CALL);
bool could_be_projection;
Oid rettype;
Oid funcid;
@@ -221,6 +222,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
agg_order == NIL && agg_filter == NULL &&
!agg_star && !agg_distinct && over == NULL &&
!func_variadic && argnames == NIL &&
+ funcformat == COERCE_EXPLICIT_CALL &&
list_length(funcname) == 1 &&
(actual_arg_types[0] == RECORDOID ||
ISCOMPLEX(actual_arg_types[0])));
@@ -742,7 +744,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
funcexpr->funcresulttype = rettype;
funcexpr->funcretset = retset;
funcexpr->funcvariadic = func_variadic;
- funcexpr->funcformat = COERCE_EXPLICIT_CALL;
+ funcexpr->funcformat = funcformat;
/* funccollid and inputcollid will be set by parse_collate.c */
funcexpr->args = fargs;
funcexpr->location = location;
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index 015b0538e3..254c0f65c2 100644
--- a/src/backend/parser/parse_utilcmd.c
+++ b/src/backend/parser/parse_utilcmd.c
@@ -604,6 +604,7 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column)
castnode->location = -1;
funccallnode = makeFuncCall(SystemFuncName("nextval"),
list_make1(castnode),
+ COERCE_EXPLICIT_CALL,
-1);
constraint = makeNode(Constraint);
constraint->contype = CONSTR_DEFAULT;
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 28f56074c0..3fabcca82f 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -443,6 +443,7 @@ static void get_agg_expr(Aggref *aggref, deparse_context *context,
static void get_agg_combine_expr(Node *node, deparse_context *context,
void *callback_arg);
static void get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context);
+static bool get_func_sql_syntax(FuncExpr *expr, deparse_context *context);
static void get_coercion_expr(Node *arg, deparse_context *context,
Oid resulttype, int32 resulttypmod,
Node *parentNode);
@@ -9155,7 +9156,8 @@ looks_like_function(Node *node)
{
case T_FuncExpr:
/* OK, unless it's going to deparse as a cast */
- return (((FuncExpr *) node)->funcformat == COERCE_EXPLICIT_CALL);
+ return (((FuncExpr *) node)->funcformat == COERCE_EXPLICIT_CALL ||
+ ((FuncExpr *) node)->funcformat == COERCE_SQL_SYNTAX);
case T_NullIfExpr:
case T_CoalesceExpr:
case T_MinMaxExpr:
@@ -9257,6 +9259,17 @@ get_func_expr(FuncExpr *expr, deparse_context *context,
return;
}
+ /*
+ * If the function was called using one of the SQL spec's random special
+ * syntaxes, try to reproduce that. If we don't recognize the function,
+ * fall through.
+ */
+ if (expr->funcformat == COERCE_SQL_SYNTAX)
+ {
+ if (get_func_sql_syntax(expr, context))
+ return;
+ }
+
/*
* Normal function: display as proname(args). First we need to extract
* the argument datatypes.
@@ -9492,6 +9505,223 @@ get_windowfunc_expr(WindowFunc *wfunc, deparse_context *context)
}
}
+/*
+ * get_func_sql_syntax - Parse back a SQL-syntax function call
+ *
+ * Returns true if we successfully deparsed, false if we did not
+ * recognize the function.
+ */
+static bool
+get_func_sql_syntax(FuncExpr *expr, deparse_context *context)
+{
+ StringInfo buf = context->buf;
+ Oid funcoid = expr->funcid;
+
+ switch (funcoid)
+ {
+ case F_TIMEZONE_INTERVAL_TIMESTAMP:
+ case F_TIMEZONE_INTERVAL_TIMESTAMPTZ:
+ case F_TIMEZONE_INTERVAL_TIMETZ:
+ case F_TIMEZONE_TEXT_TIMESTAMP:
+ case F_TIMEZONE_TEXT_TIMESTAMPTZ:
+ case F_TIMEZONE_TEXT_TIMETZ:
+ /* AT TIME ZONE ... note reversed argument order */
+ appendStringInfoChar(buf, '(');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " AT TIME ZONE ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_OVERLAPS_TIMESTAMPTZ_INTERVAL_TIMESTAMPTZ_INTERVAL:
+ case F_OVERLAPS_TIMESTAMPTZ_INTERVAL_TIMESTAMPTZ_TIMESTAMPTZ:
+ case F_OVERLAPS_TIMESTAMPTZ_TIMESTAMPTZ_TIMESTAMPTZ_INTERVAL:
+ case F_OVERLAPS_TIMESTAMPTZ_TIMESTAMPTZ_TIMESTAMPTZ_TIMESTAMPTZ:
+ case F_OVERLAPS_TIMESTAMP_INTERVAL_TIMESTAMP_INTERVAL:
+ case F_OVERLAPS_TIMESTAMP_INTERVAL_TIMESTAMP_TIMESTAMP:
+ case F_OVERLAPS_TIMESTAMP_TIMESTAMP_TIMESTAMP_INTERVAL:
+ case F_OVERLAPS_TIMESTAMP_TIMESTAMP_TIMESTAMP_TIMESTAMP:
+ case F_OVERLAPS_TIMETZ_TIMETZ_TIMETZ_TIMETZ:
+ case F_OVERLAPS_TIME_INTERVAL_TIME_INTERVAL:
+ case F_OVERLAPS_TIME_INTERVAL_TIME_TIME:
+ case F_OVERLAPS_TIME_TIME_TIME_INTERVAL:
+ case F_OVERLAPS_TIME_TIME_TIME_TIME:
+ /* (x1, x2) OVERLAPS (y1, y2) */
+ appendStringInfoString(buf, "((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ", ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, ") OVERLAPS (");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ appendStringInfoString(buf, ", ");
+ get_rule_expr((Node *) lfourth(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+
+ case F_IS_NORMALIZED:
+ /* IS xxx NORMALIZED */
+ appendStringInfoString(buf, "((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ") IS");
+ if (list_length(expr->args) == 2)
+ {
+ Const *con = (Const *) lsecond(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfo(buf, " %s",
+ TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoString(buf, " NORMALIZED)");
+ return true;
+
+ case F_PG_COLLATION_FOR:
+ /* COLLATION FOR */
+ appendStringInfoString(buf, "COLLATION FOR (");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ /*
+ * XXX EXTRACT, a/k/a date_part(), is intentionally not covered
+ * yet. Add it after we change the return type to numeric.
+ */
+
+ case F_NORMALIZE:
+ /* NORMALIZE() */
+ appendStringInfoString(buf, "NORMALIZE(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ if (list_length(expr->args) == 2)
+ {
+ Const *con = (Const *) lsecond(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfo(buf, ", %s",
+ TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_OVERLAY_BIT_BIT_INT4:
+ case F_OVERLAY_BIT_BIT_INT4_INT4:
+ case F_OVERLAY_BYTEA_BYTEA_INT4:
+ case F_OVERLAY_BYTEA_BYTEA_INT4_INT4:
+ case F_OVERLAY_TEXT_TEXT_INT4:
+ case F_OVERLAY_TEXT_TEXT_INT4_INT4:
+ /* OVERLAY() */
+ appendStringInfoString(buf, "OVERLAY(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " PLACING ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ if (list_length(expr->args) == 4)
+ {
+ appendStringInfoString(buf, " FOR ");
+ get_rule_expr((Node *) lfourth(expr->args), context, false);
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_POSITION_BIT_BIT:
+ case F_POSITION_BYTEA_BYTEA:
+ case F_POSITION_TEXT_TEXT:
+ /* POSITION() ... extra parens since args are b_expr not a_expr */
+ appendStringInfoString(buf, "POSITION((");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, ") IN (");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+
+ case F_SUBSTRING_BIT_INT4:
+ case F_SUBSTRING_BIT_INT4_INT4:
+ case F_SUBSTRING_BYTEA_INT4:
+ case F_SUBSTRING_BYTEA_INT4_INT4:
+ case F_SUBSTRING_TEXT_INT4:
+ case F_SUBSTRING_TEXT_INT4_INT4:
+ /* SUBSTRING FROM/FOR (i.e., integer-position variants) */
+ appendStringInfoString(buf, "SUBSTRING(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ if (list_length(expr->args) == 3)
+ {
+ appendStringInfoString(buf, " FOR ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ }
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_SUBSTRING_TEXT_TEXT_TEXT:
+ /* SUBSTRING SIMILAR/ESCAPE */
+ appendStringInfoString(buf, "SUBSTRING(");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, " SIMILAR ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, " ESCAPE ");
+ get_rule_expr((Node *) lthird(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_BTRIM_BYTEA_BYTEA:
+ case F_BTRIM_TEXT:
+ case F_BTRIM_TEXT_TEXT:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(BOTH");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_LTRIM_TEXT:
+ case F_LTRIM_TEXT_TEXT:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(LEADING");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_RTRIM_TEXT:
+ case F_RTRIM_TEXT_TEXT:
+ /* TRIM() */
+ appendStringInfoString(buf, "TRIM(TRAILING");
+ if (list_length(expr->args) == 2)
+ {
+ appendStringInfoChar(buf, ' ');
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
+ case F_XMLEXISTS:
+ /* XMLEXISTS ... extra parens because args are c_expr */
+ appendStringInfoString(buf, "XMLEXISTS((");
+ get_rule_expr((Node *) linitial(expr->args), context, false);
+ appendStringInfoString(buf, ") PASSING (");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoString(buf, "))");
+ return true;
+ }
+ return false;
+}
+
/* ----------
* get_coercion_expr
*
diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h
index 31d9aedeeb..7ebd794713 100644
--- a/src/include/nodes/makefuncs.h
+++ b/src/include/nodes/makefuncs.h
@@ -79,7 +79,8 @@ extern ColumnDef *makeColumnDef(const char *colname,
extern FuncExpr *makeFuncExpr(Oid funcid, Oid rettype, List *args,
Oid funccollid, Oid inputcollid, CoercionForm fformat);
-extern FuncCall *makeFuncCall(List *name, List *args, int location);
+extern FuncCall *makeFuncCall(List *name, List *args,
+ CoercionForm funcformat, int location);
extern Expr *make_opclause(Oid opno, Oid opresulttype, bool opretset,
Expr *leftop, Expr *rightop,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e1aeea2560..80e2aba369 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -353,11 +353,12 @@ typedef struct FuncCall
List *args; /* the arguments (list of exprs) */
List *agg_order; /* ORDER BY (list of SortBy) */
Node *agg_filter; /* FILTER clause, if any */
+ struct WindowDef *over; /* OVER clause, if any */
bool agg_within_group; /* ORDER BY appeared in WITHIN GROUP */
bool agg_star; /* argument was really '*' */
bool agg_distinct; /* arguments were labeled DISTINCT */
bool func_variadic; /* last argument was labeled VARIADIC */
- struct WindowDef *over; /* OVER clause, if any */
+ CoercionForm funcformat; /* how to display this node */
int location; /* token location, or -1 if unknown */
} FuncCall;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index fd65ee8f9c..5b190bb99b 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -445,7 +445,10 @@ typedef enum CoercionContext
} CoercionContext;
/*
- * CoercionForm - how to display a node that could have come from a cast
+ * CoercionForm - how to display a FuncExpr or related node
+ *
+ * "Coercion" is a bit of a misnomer, since this value records other
+ * special syntaxes besides casts, but for now we'll keep this naming.
*
* NB: equal() ignores CoercionForm fields, therefore this *must* not carry
* any semantically significant information. We need that behavior so that
@@ -457,7 +460,8 @@ typedef enum CoercionForm
{
COERCE_EXPLICIT_CALL, /* display as a function call */
COERCE_EXPLICIT_CAST, /* display as an explicit cast */
- COERCE_IMPLICIT_CAST /* implicit cast, so hide it */
+ COERCE_IMPLICIT_CAST, /* implicit cast, so hide it */
+ COERCE_SQL_SYNTAX /* display with SQL-mandated special syntax */
} CoercionForm;
/*
diff --git a/src/test/modules/test_rls_hooks/test_rls_hooks.c b/src/test/modules/test_rls_hooks/test_rls_hooks.c
index 0bfa878a25..c0aaabdcdb 100644
--- a/src/test/modules/test_rls_hooks/test_rls_hooks.c
+++ b/src/test/modules/test_rls_hooks/test_rls_hooks.c
@@ -95,7 +95,10 @@ test_rls_hooks_permissive(CmdType cmdtype, Relation relation)
*/
n = makeFuncCall(list_make2(makeString("pg_catalog"),
- makeString("current_user")), NIL, 0);
+ makeString("current_user")),
+ NIL,
+ COERCE_EXPLICIT_CALL,
+ -1);
c = makeNode(ColumnRef);
c->fields = list_make1(makeString("username"));
@@ -155,7 +158,10 @@ test_rls_hooks_restrictive(CmdType cmdtype, Relation relation)
policy->roles = construct_array(&role, 1, OIDOID, sizeof(Oid), true, TYPALIGN_INT);
n = makeFuncCall(list_make2(makeString("pg_catalog"),
- makeString("current_user")), NIL, 0);
+ makeString("current_user")),
+ NIL,
+ COERCE_EXPLICIT_CALL,
+ -1);
c = makeNode(ColumnRef);
c->fields = list_make1(makeString("supervisor"));
diff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out
index f10a3a7a12..b234d2d4f9 100644
--- a/src/test/regress/expected/create_view.out
+++ b/src/test/regress/expected/create_view.out
@@ -1710,13 +1710,52 @@ select pg_get_viewdef('tt20v', true);
i4.i4, +
i8.i8 +
FROM COALESCE(1, 2) c(c), +
- pg_collation_for('x'::text) col(col), +
+ COLLATION FOR ('x'::text) col(col), +
CURRENT_DATE d(d), +
LOCALTIMESTAMP(3) t(t), +
CAST(1 + 2 AS integer) i4(i4), +
CAST((1 + 2)::bigint AS bigint) i8(i8);
(1 row)
+-- reverse-listing of various special function syntaxes required by SQL
+create view tt201v as
+select
+ extract(day from now()) as extr,
+ (now(), '1 day'::interval) overlaps
+ (current_timestamp(2), '1 day'::interval) as o,
+ 'foo' is normalized isn,
+ 'foo' is nfkc normalized isnn,
+ normalize('foo') as n,
+ normalize('foo', nfkd) as nfkd,
+ overlay('foo' placing 'bar' from 2) as ovl,
+ overlay('foo' placing 'bar' from 2 for 3) as ovl2,
+ position('foo' in 'foobar') as p,
+ substring('foo' from 2 for 3) as s,
+ substring('foo' similar 'f' escape '#') as ss,
+ substring('foo' from 'oo') as ssf, -- historically-permitted abuse
+ trim(' ' from ' foo ') as bt,
+ trim(leading ' ' from ' foo ') as lt,
+ trim(trailing ' foo ') as rt;
+select pg_get_viewdef('tt201v', true);
+ pg_get_viewdef
+-----------------------------------------------------------------------------------------------
+ SELECT date_part('day'::text, now()) AS extr, +
+ ((now(), '@ 1 day'::interval) OVERLAPS (CURRENT_TIMESTAMP(2), '@ 1 day'::interval)) AS o,+
+ (('foo'::text) IS NORMALIZED) AS isn, +
+ (('foo'::text) IS NFKC NORMALIZED) AS isnn, +
+ NORMALIZE('foo'::text) AS n, +
+ NORMALIZE('foo'::text, NFKD) AS nfkd, +
+ OVERLAY('foo'::text PLACING 'bar'::text FROM 2) AS ovl, +
+ OVERLAY('foo'::text PLACING 'bar'::text FROM 2 FOR 3) AS ovl2, +
+ POSITION(('foo'::text) IN ('foobar'::text)) AS p, +
+ SUBSTRING('foo'::text FROM 2 FOR 3) AS s, +
+ SUBSTRING('foo'::text SIMILAR 'f'::text ESCAPE '#'::text) AS ss, +
+ "substring"('foo'::text, 'oo'::text) AS ssf, +
+ TRIM(BOTH ' '::text FROM ' foo '::text) AS bt, +
+ TRIM(LEADING ' '::text FROM ' foo '::text) AS lt, +
+ TRIM(TRAILING FROM ' foo '::text) AS rt;
+(1 row)
+
-- corner cases with empty join conditions
create view tt21v as
select * from tt5 natural inner join tt6;
@@ -1904,7 +1943,7 @@ drop cascades to view aliased_view_2
drop cascades to view aliased_view_3
drop cascades to view aliased_view_4
DROP SCHEMA testviewschm2 CASCADE;
-NOTICE: drop cascades to 67 other objects
+NOTICE: drop cascades to 68 other objects
DETAIL: drop cascades to table t1
drop cascades to view temporal1
drop cascades to view temporal2
@@ -1966,6 +2005,7 @@ drop cascades to view tt17v
drop cascades to view tt18v
drop cascades to view tt19v
drop cascades to view tt20v
+drop cascades to view tt201v
drop cascades to view tt21v
drop cascades to view tt22v
drop cascades to view tt23v
diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out
index 639b50308e..c300965554 100644
--- a/src/test/regress/expected/timestamptz.out
+++ b/src/test/regress/expected/timestamptz.out
@@ -2726,10 +2726,10 @@ create temp table tmptz (f1 timestamptz primary key);
insert into tmptz values ('2017-01-18 00:00+00');
explain (costs off)
select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
- QUERY PLAN
--------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-----------------------------------------------------------------------------------------------------
Seq Scan on tmptz
- Filter: (timezone('utc'::text, f1) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
+ Filter: ((f1 AT TIME ZONE 'utc'::text) = 'Wed Jan 18 00:00:00 2017'::timestamp without time zone)
(2 rows)
select * from tmptz where f1 at time zone 'utc' = '2017-01-18 00:00';
diff --git a/src/test/regress/sql/create_view.sql b/src/test/regress/sql/create_view.sql
index e7af0bf2fa..6d4dd53965 100644
--- a/src/test/regress/sql/create_view.sql
+++ b/src/test/regress/sql/create_view.sql
@@ -586,6 +586,28 @@ select * from
cast(1+2 as int8) as i8;
select pg_get_viewdef('tt20v', true);
+-- reverse-listing of various special function syntaxes required by SQL
+
+create view tt201v as
+select
+ extract(day from now()) as extr,
+ (now(), '1 day'::interval) overlaps
+ (current_timestamp(2), '1 day'::interval) as o,
+ 'foo' is normalized isn,
+ 'foo' is nfkc normalized isnn,
+ normalize('foo') as n,
+ normalize('foo', nfkd) as nfkd,
+ overlay('foo' placing 'bar' from 2) as ovl,
+ overlay('foo' placing 'bar' from 2 for 3) as ovl2,
+ position('foo' in 'foobar') as p,
+ substring('foo' from 2 for 3) as s,
+ substring('foo' similar 'f' escape '#') as ss,
+ substring('foo' from 'oo') as ssf, -- historically-permitted abuse
+ trim(' ' from ' foo ') as bt,
+ trim(leading ' ' from ' foo ') as lt,
+ trim(trailing ' foo ') as rt;
+select pg_get_viewdef('tt201v', true);
+
-- corner cases with empty join conditions
create view tt21v as
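The patch excerpt above wires a new `COERCE_SQL_SYNTAX` value through the parser and teaches ruleutils.c to reverse-list recognized calls in their SQL-mandated spelling. As a rough illustration only (plain Python, not PostgreSQL source; function and argument names are invented for the sketch), the `get_func_sql_syntax()` dispatch works like this — recognize the catalog function, emit the special syntax (sometimes with reordered arguments), and signal "unrecognized" so the caller can fall back to ordinary `proname(args)` output:

```python
# Hypothetical sketch of the get_func_sql_syntax() deparse dispatch.
# Returns the SQL special syntax for a recognized call, else None
# (the C code returns false and the caller prints proname(args)).

def deparse_sql_syntax(funcname, args):
    if funcname == "btrim" and len(args) in (1, 2):
        # TRIM(BOTH [chars] FROM source); catalog order is (source, chars)
        chars = f" {args[1]}" if len(args) == 2 else ""
        return f"TRIM(BOTH{chars} FROM {args[0]})"
    if funcname == "position" and len(args) == 2:
        # POSITION(needle IN haystack); catalog order is (haystack, needle),
        # and extra parens are kept because the grammar takes b_expr here
        return f"POSITION(({args[1]}) IN ({args[0]}))"
    return None  # unrecognized: deparse as a normal function call

print(deparse_sql_syntax("btrim", ["' foo '", "' '"]))
# TRIM(BOTH ' ' FROM ' foo ')
```

The `None` fallback mirrors why the C function returns bool: an unlisted function (or a future catalog change) must still deparse, just in the plain `schema.func(args)` form.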
Here is a new patch for this. This now follows the implementation that
Tom has suggested: Leave date_part() alone, add a new set of extract()
functions, and map the SQL EXTRACT construct to those. I have basically
just copied over the implementations from my previous patch and placed
them next to the existing date_part() implementations. So all the
behavior is still the same as in the previous patches.
One thing I still need to look into is how to not lose all the test
coverage for date_part(). But that should be fairly mechanical, so I'm
leaving it out of this version.
--
Peter Eisentraut
2ndQuadrant, an EDB company
https://www.2ndquadrant.com/
Attachments:
v4-0001-Change-return-type-of-EXTRACT-to-numeric.patch (text/plain)
From b40213b0d31b8ac55cda4658c21d06e701b6e273 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Tue, 15 Dec 2020 14:27:58 +0100
Subject: [PATCH v4] Change return type of EXTRACT to numeric
The previous implementation of EXTRACT mapped internally to
date_part(), which returned type double precision (since it was
implemented long before the numeric type existed). This can lead to
imprecise output in some cases, so returning numeric would be
preferrable. Changing the return type of an existing function is a
bit risky, so instead we do the following: We implement a new set of
functions, which are now called "extract", in parallel to the existing
date_part functions. They work the same way internally but use
numeric instead of float8. The EXTRACT construct is now mapped by the
parser to these new extract functions. That way, dumps of views
etc. from old versions (which would use date_part) continue to work
unchanged, but new uses will map to the new extract functions.
Additionally, the reverse compilation of EXTRACT now reproduces the
original syntax, using the new mechanism introduced in
40c24bfef92530bd846e111c1742c2a54441c62c.
The following minor changes of behavior result from the new
implementation:
- The column name from an isolated EXTRACT call is now "extract"
instead of "date_part".
- Extract from date now rejects inappropriate field names such as
HOUR. It was previously mapped internally to extract from
timestamp, so it would silently accept everything appropriate for
timestamp.
- Return values when extracting fields with possibly fractional
values, such as second and epoch, now have the full scale that the
value has internally (so, for example, '1.000000' instead of just
'1').
Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
---
doc/src/sgml/func.sgml | 10 +-
src/backend/parser/gram.y | 2 +-
src/backend/utils/adt/date.c | 407 +++++++++++
src/backend/utils/adt/ruleutils.c | 21 +
src/backend/utils/adt/timestamp.c | 708 +++++++++++++++++++-
src/include/catalog/pg_proc.dat | 18 +
src/test/regress/expected/create_view.out | 2 +-
src/test/regress/expected/date.out | 482 ++++++-------
src/test/regress/expected/interval.out | 72 +-
src/test/regress/expected/psql_crosstab.out | 12 +-
src/test/regress/expected/time.out | 24 +-
src/test/regress/expected/timetz.out | 42 +-
src/test/regress/sql/date.sql | 6 +-
13 files changed, 1445 insertions(+), 361 deletions(-)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index df29af6371..a42993c4a5 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -8705,7 +8705,7 @@ <title>Date/Time Functions</title>
<primary>extract</primary>
</indexterm>
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>timestamp</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get timestamp subfield; see <xref linkend="functions-datetime-extract"/>
@@ -8719,7 +8719,7 @@ <title>Date/Time Functions</title>
<row>
<entry role="func_table_entry"><para role="func_signature">
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>interval</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get interval subfield; see <xref linkend="functions-datetime-extract"/>
@@ -9234,7 +9234,7 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
well.) <replaceable>field</replaceable> is an identifier or
string that selects what field to extract from the source value.
The <function>extract</function> function returns values of type
- <type>double precision</type>.
+ <type>numeric</type>.
The following are valid field names:
<!-- alphabetical -->
@@ -9645,6 +9645,10 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
be a string value, not a name. The valid field names for
<function>date_part</function> are the same as for
<function>extract</function>.
+ For historical reasons, the <function>date_part</function> function
+ returns values of type <type>double precision</type>. This can result in
+ a loss of precision in certain uses. Using <function>extract</function>
+ is recommended instead.
</para>
<screen>
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 8f341ac006..2e8924aff8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -13768,7 +13768,7 @@ func_expr_common_subexpr:
{ $$ = makeTypeCast($3, $5, @1); }
| EXTRACT '(' extract_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("date_part"),
+ $$ = (Node *) makeFuncCall(SystemFuncName("extract"),
$3,
COERCE_SQL_SYNTAX,
@1);
diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c
index a470cf890a..61fd69829c 100644
--- a/src/backend/utils/adt/date.c
+++ b/src/backend/utils/adt/date.c
@@ -31,6 +31,7 @@
#include "utils/builtins.h"
#include "utils/date.h"
#include "utils/datetime.h"
+#include "utils/numeric.h"
#include "utils/sortsupport.h"
/*
@@ -1063,6 +1064,180 @@ in_range_date_interval(PG_FUNCTION_ARGS)
}
+/* extract_date()
+ * Extract specified field from date type.
+ */
+Datum
+extract_date(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ DateADT date = PG_GETARG_DATEADT(1);
+ int64 result;
+ int type,
+ val;
+ char *lowunits;
+ int year, mon, mday;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (DATE_NOT_FINITE(date) && (type == UNITS || type == RESERV))
+ {
+ switch (val)
+ {
+ /* Oscillating units */
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_WEEK:
+ case DTK_DOW:
+ case DTK_ISODOW:
+ case DTK_DOY:
+ PG_RETURN_NULL();
+ break;
+
+ /* Monotonically-increasing units */
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_JULIAN:
+ case DTK_ISOYEAR:
+ case DTK_EPOCH:
+ if (DATE_IS_NOBEGIN(date))
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ else
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ }
+ }
+ else if (type == UNITS)
+ {
+ j2date(date + POSTGRES_EPOCH_JDATE, &year, &mon, &mday);
+
+ switch (val)
+ {
+ case DTK_DAY:
+ result = mday;
+ break;
+
+ case DTK_MONTH:
+ result = mon;
+ break;
+
+ case DTK_QUARTER:
+ result = (mon - 1) / 3 + 1;
+ break;
+
+ case DTK_WEEK:
+ result = date2isoweek(year, mon, mday);
+ break;
+
+ case DTK_YEAR:
+ if (year > 0)
+ result = year;
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ result = year - 1;
+ break;
+
+ case DTK_DECADE:
+ /* see comments in timestamp_part */
+ if (year >= 0)
+ result = year / 10;
+ else
+ result = -((8 - (year - 1)) / 10);
+ break;
+
+ case DTK_CENTURY:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ result = (year + 99) / 100;
+ else
+ result = -((99 - (year - 1)) / 100);
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ result = (year + 999) / 1000;
+ else
+ result = -((999 - (year - 1)) / 1000);
+ break;
+
+ case DTK_JULIAN:
+ result = date + POSTGRES_EPOCH_JDATE;
+ break;
+
+ case DTK_ISOYEAR:
+ result = date2isoyear(year, mon, mday);
+ /* Adjust BC years */
+ if (result <= 0)
+ result -= 1;
+ break;
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ result = j2day(date + POSTGRES_EPOCH_JDATE);
+ if (val == DTK_ISODOW && result == 0)
+ result = 7;
+ break;
+
+ case DTK_DOY:
+ result = date2j(year, mon, mday) - date2j(year, 1, 1) + 1;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ result = ((int64) date + POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("date units \"%s\" not recognized", lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(int64_to_numeric(result));
+}
+
+
/* Add an interval to a date, giving a new date.
* Must handle both positive and negative intervals.
*
@@ -2036,6 +2211,114 @@ time_part(PG_FUNCTION_ARGS)
}
+/* extract_time()
+ * Extract specified field from time type.
+ */
+Datum
+extract_time(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ TimeADT time = PG_GETARG_TIMEADT(1);
+ Numeric result;
+ int type,
+ val;
+ char *lowunits;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (type == UNITS)
+ {
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ time2tm(time, tm, &fsec);
+
+ switch (val)
+ {
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_TZ:
+ case DTK_TZ_MINUTE:
+ case DTK_TZ_HOUR:
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_ISOYEAR:
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV && val == DTK_EPOCH)
+ {
+ result = numeric_div_opt_error(int64_to_numeric(time),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
+
/*****************************************************************************
* Time With Time Zone ADT
*****************************************************************************/
@@ -2785,6 +3068,130 @@ timetz_part(PG_FUNCTION_ARGS)
PG_RETURN_FLOAT8(result);
}
+
+/* extract_timetz()
+ * Extract specified field from time with time zone type.
+ */
+Datum
+extract_timetz(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ TimeTzADT *time = PG_GETARG_TIMETZADT_P(1);
+ Numeric result;
+ int type,
+ val;
+ char *lowunits;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (type == UNITS)
+ {
+ int tz;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ timetz2tm(time, tm, &fsec, &tz);
+
+ switch (val)
+ {
+ case DTK_TZ:
+ result = int64_to_numeric(-tz);
+ break;
+
+ case DTK_TZ_MINUTE:
+ /* trunc(-tz / 60) % 60 */
+ result = numeric_mod_opt_error(int64_to_numeric(-tz / SECS_PER_MINUTE),
+ int64_to_numeric(SECS_PER_MINUTE),
+ NULL);
+ break;
+
+ case DTK_TZ_HOUR:
+ result = int64_to_numeric(-tz / SECS_PER_HOUR);
+ break;
+
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time with time zone\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV && val == DTK_EPOCH)
+ {
+ /* time->time / 1000000.0 + time->zone */
+ result = numeric_add_opt_error(numeric_div_opt_error(int64_to_numeric(time->time),
+ int64_to_numeric(1000000),
+ NULL),
+ int64_to_numeric(time->zone),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time with time zone\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
+
/* timetz_zone()
* Encode time with time zone type with specified time zone.
* Applies DST rules as of the current date.
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index ad582f99a5..d312dfa756 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -9556,6 +9556,27 @@ get_func_sql_syntax(FuncExpr *expr, deparse_context *context)
appendStringInfoString(buf, "))");
return true;
+ case F_EXTRACT_TEXT_DATE:
+ case F_EXTRACT_TEXT_TIME:
+ case F_EXTRACT_TEXT_TIMETZ:
+ case F_EXTRACT_TEXT_TIMESTAMP:
+ case F_EXTRACT_TEXT_TIMESTAMPTZ:
+ case F_EXTRACT_TEXT_INTERVAL:
+ /* EXTRACT (x FROM y) */
+ appendStringInfoString(buf, "EXTRACT(");
+ {
+ Const *con = (Const *) linitial(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfoString(buf, TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
case F_IS_NORMALIZED:
/* IS xxx NORMALIZED */
appendStringInfoString(buf, "((");
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 2dbd309122..4c643033ad 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -35,6 +35,7 @@
#include "utils/date.h"
#include "utils/datetime.h"
#include "utils/float.h"
+#include "utils/numeric.h"
/*
* gcc's -ffast-math switch breaks routines that expect exact results from
@@ -4447,7 +4448,7 @@ date2isoyearday(int year, int mon, int mday)
}
/*
- * NonFiniteTimestampTzPart
+ * NonFiniteTimestampTzPart_float8
*
* Used by timestamp_part and timestamptz_part when extracting from infinite
* timestamp[tz]. Returns +/-Infinity if that is the appropriate result,
@@ -4458,8 +4459,8 @@ date2isoyearday(int year, int mon, int mday)
* discrepancies between finite- and infinite-input cases.
*/
static float8
-NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
- bool isNegative, bool isTz)
+NonFiniteTimestampTzPart_float8(int type, int unit, char *lowunits,
+ bool isNegative, bool isTz)
{
if ((type != UNITS) && (type != RESERV))
{
@@ -4523,6 +4524,89 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
}
}
+/*
+ * NonFiniteTimestampTzPart_numeric
+ *
+ * Used by extract_timestamp and extract_timestamptz when extracting from infinite
+ * timestamp[tz]. Returns +/-Infinity if that is the appropriate result,
+ * otherwise returns NULL (which should be taken as meaning to return SQL NULL).
+ *
+ * Errors thrown here for invalid units should exactly match those that
+ * would be thrown in the calling functions, else there will be unexpected
+ * discrepancies between finite- and infinite-input cases.
+ */
+static Numeric
+NonFiniteTimestampTzPart_numeric(int type, int unit, char *lowunits,
+ bool isNegative, bool isTz)
+{
+ if ((type != UNITS) && (type != RESERV))
+ {
+ if (isTz)
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp with time zone units \"%s\" not recognized",
+ lowunits)));
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp units \"%s\" not recognized",
+ lowunits)));
+ }
+
+ switch (unit)
+ {
+ /* Oscillating units */
+ case DTK_MICROSEC:
+ case DTK_MILLISEC:
+ case DTK_SECOND:
+ case DTK_MINUTE:
+ case DTK_HOUR:
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_WEEK:
+ case DTK_DOW:
+ case DTK_ISODOW:
+ case DTK_DOY:
+ case DTK_TZ:
+ case DTK_TZ_MINUTE:
+ case DTK_TZ_HOUR:
+ return NULL;
+
+ /* Monotonically-increasing units */
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_JULIAN:
+ case DTK_ISOYEAR:
+ case DTK_EPOCH:
+ if (isNegative)
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
+ else
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
+
+ default:
+ if (isTz)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp with time zone units \"%s\" not supported",
+ lowunits)));
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp units \"%s\" not supported",
+ lowunits)));
+ return NULL; /* keep compiler quiet */
+ }
+}
+
/* timestamp_part()
* Extract specified field from timestamp.
*/
@@ -4550,9 +4634,9 @@ timestamp_part(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(timestamp))
{
- result = NonFiniteTimestampTzPart(type, val, lowunits,
- TIMESTAMP_IS_NOBEGIN(timestamp),
- false);
+ result = NonFiniteTimestampTzPart_float8(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ false);
if (result)
PG_RETURN_FLOAT8(result);
else
@@ -4717,6 +4801,237 @@ timestamp_part(PG_FUNCTION_ARGS)
PG_RETURN_FLOAT8(result);
}
+/* extract_timestamp()
+ * Extract specified field from timestamp.
+ */
+Datum
+extract_timestamp(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ Timestamp timestamp = PG_GETARG_TIMESTAMP(1);
+ Numeric result;
+ Timestamp epoch;
+ int type,
+ val;
+ char *lowunits;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (TIMESTAMP_NOT_FINITE(timestamp))
+ {
+ result = NonFiniteTimestampTzPart_numeric(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ false);
+ if (result)
+ PG_RETURN_NUMERIC(result);
+ else
+ PG_RETURN_NULL();
+ }
+
+ if (type == UNITS)
+ {
+ if (timestamp2tm(timestamp, NULL, tm, &fsec, NULL, NULL) != 0)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
+ errmsg("timestamp out of range")));
+
+ switch (val)
+ {
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ result = int64_to_numeric(tm->tm_mday);
+ break;
+
+ case DTK_MONTH:
+ result = int64_to_numeric(tm->tm_mon);
+ break;
+
+ case DTK_QUARTER:
+ result = int64_to_numeric((tm->tm_mon - 1) / 3 + 1);
+ break;
+
+ case DTK_WEEK:
+ result = int64_to_numeric(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ break;
+
+ case DTK_YEAR:
+ if (tm->tm_year > 0)
+ result = int64_to_numeric(tm->tm_year);
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ result = int64_to_numeric(tm->tm_year - 1);
+ break;
+
+ case DTK_DECADE:
+
+ /*
+ * what is a decade wrt dates? let us assume that decade 199
+ * is 1990 thru 1999... decade 0 starts on year 1 BC, and -1
+ * is 11 BC thru 2 BC...
+ */
+ if (tm->tm_year >= 0)
+ result = int64_to_numeric(tm->tm_year / 10);
+ else
+ result = int64_to_numeric(-((8 - (tm->tm_year - 1)) / 10));
+ break;
+
+ case DTK_CENTURY:
+
+ /* ----
+ * centuries AD, c>0: year in [ (c-1)* 100 + 1 : c*100 ]
+ * centuries BC, c<0: year in [ c*100 : (c+1) * 100 - 1]
+ * there is no number 0 century.
+ * ----
+ */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 99) / 100);
+ else
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(-((99 - (tm->tm_year - 1)) / 100));
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments above. */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 999) / 1000);
+ else
+ result = int64_to_numeric(-((999 - (tm->tm_year - 1)) / 1000));
+ break;
+
+ case DTK_JULIAN:
+ result = numeric_add_opt_error(
+ int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(
+ int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL);
+ break;
+
+ case DTK_ISOYEAR:
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ /* Adjust BC years */
+ if (tmp <= 0)
+ tmp -= 1;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOY:
+ result = int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
+ break;
+
+ case DTK_TZ:
+ case DTK_TZ_MINUTE:
+ case DTK_TZ_HOUR:
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp units \"%s\" not supported",
+ lowunits)));
+ result = NULL;
+ }
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ /* (timestamp - epoch) / 1000000 */
+ epoch = SetEpochTimestamp();
+ /* epoch is negative, so PG_INT64_MAX + epoch cannot overflow */
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = numeric_div_opt_error(
+ int64_to_numeric(timestamp - epoch),
+ int64_to_numeric(1000000),
+ NULL);
+ else
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp units \"%s\" not supported",
+ lowunits)));
+ result = NULL;
+ }
+
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp units \"%s\" not recognized", lowunits)));
+ result = NULL;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
/* timestamptz_part()
* Extract specified field from timestamp with time zone.
*/
@@ -4746,9 +5061,9 @@ timestamptz_part(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(timestamp))
{
- result = NonFiniteTimestampTzPart(type, val, lowunits,
- TIMESTAMP_IS_NOBEGIN(timestamp),
- true);
+ result = NonFiniteTimestampTzPart_float8(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ true);
if (result)
PG_RETURN_FLOAT8(result);
else
@@ -4915,6 +5230,240 @@ timestamptz_part(PG_FUNCTION_ARGS)
PG_RETURN_FLOAT8(result);
}
+/* extract_timestamptz()
+ * Extract specified field from timestamp with time zone.
+ */
+Datum
+extract_timestamptz(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ TimestampTz timestamp = PG_GETARG_TIMESTAMPTZ(1);
+ Numeric result;
+ Timestamp epoch;
+ int tz;
+ int type,
+ val;
+ char *lowunits;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (TIMESTAMP_NOT_FINITE(timestamp))
+ {
+ result = NonFiniteTimestampTzPart_numeric(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ true);
+ if (result)
+ PG_RETURN_NUMERIC(result);
+ else
+ PG_RETURN_NULL();
+ }
+
+ if (type == UNITS)
+ {
+ if (timestamp2tm(timestamp, &tz, tm, &fsec, NULL, NULL) != 0)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
+ errmsg("timestamp out of range")));
+
+ switch (val)
+ {
+ case DTK_TZ:
+ result = int64_to_numeric(-tz);
+ break;
+
+ case DTK_TZ_MINUTE:
+ /* trunc(-tz / 60) % 60 */
+ result = numeric_mod_opt_error(int64_to_numeric(-tz / SECS_PER_MINUTE),
+ int64_to_numeric(SECS_PER_MINUTE),
+ NULL);
+ break;
+
+ case DTK_TZ_HOUR:
+ result = int64_to_numeric(-tz / SECS_PER_HOUR);
+ break;
+
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ result = int64_to_numeric(tm->tm_mday);
+ break;
+
+ case DTK_MONTH:
+ result = int64_to_numeric(tm->tm_mon);
+ break;
+
+ case DTK_QUARTER:
+ result = int64_to_numeric((tm->tm_mon - 1) / 3 + 1);
+ break;
+
+ case DTK_WEEK:
+ result = int64_to_numeric(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ break;
+
+ case DTK_YEAR:
+ if (tm->tm_year > 0)
+ result = int64_to_numeric(tm->tm_year);
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ result = int64_to_numeric(tm->tm_year - 1);
+ break;
+
+ case DTK_DECADE:
+ /* see comments in timestamp_part */
+ if (tm->tm_year >= 0)
+ result = int64_to_numeric(tm->tm_year / 10);
+ else
+ result = int64_to_numeric(-((8 - (tm->tm_year - 1)) / 10));
+ break;
+
+ case DTK_CENTURY:
+ /* see comments in timestamp_part */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 99) / 100);
+ else
+ result = int64_to_numeric(-((99 - (tm->tm_year - 1)) / 100));
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments in timestamp_part */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 999) / 1000);
+ else
+ result = int64_to_numeric(-((999 - (tm->tm_year - 1)) / 1000));
+ break;
+
+ case DTK_JULIAN:
+ result = numeric_add_opt_error(
+ int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(
+ int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL);
+ break;
+
+ case DTK_ISOYEAR:
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ /* Adjust BC years */
+ if (tmp <= 0)
+ tmp -= 1;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOY:
+ result = int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp with time zone units \"%s\" not supported",
+ lowunits)));
+ result = NULL;
+ }
+
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ /* (timestamp - epoch) / 1000000 */
+ epoch = SetEpochTimestamp();
+ /* epoch is negative, so PG_INT64_MAX + epoch cannot overflow */
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = numeric_div_opt_error(
+ int64_to_numeric(timestamp - epoch),
+ int64_to_numeric(1000000),
+ NULL);
+ else
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp with time zone units \"%s\" not supported",
+ lowunits)));
+ result = NULL;
+ }
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp with time zone units \"%s\" not recognized",
+ lowunits)));
+
+ result = NULL;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
/* interval_part()
* Extract specified field from interval.
@@ -5032,6 +5581,147 @@ interval_part(PG_FUNCTION_ARGS)
}
+/* extract_interval()
+ * Extract specified field from interval.
+ */
+Datum
+extract_interval(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ Interval *interval = PG_GETARG_INTERVAL_P(1);
+ Numeric result;
+ int type,
+ val;
+ char *lowunits;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (type == UNITS)
+ {
+ if (interval2tm(*interval, tm, &fsec) == 0)
+ {
+ switch (val)
+ {
+ case DTK_MICROSEC:
+ /* tm->tm_sec * 1000000 + fsec */
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec * 1000),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(3)));
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1000000 */
+ result = numeric_add_opt_error(int64_to_numeric(tm->tm_sec),
+ numeric_div_opt_error(int64_to_numeric(fsec),
+ int64_to_numeric(1000000),
+ NULL),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ result = int64_to_numeric(tm->tm_mday);
+ break;
+
+ case DTK_MONTH:
+ result = int64_to_numeric(tm->tm_mon);
+ break;
+
+ case DTK_QUARTER:
+ result = int64_to_numeric((tm->tm_mon / 3) + 1);
+ break;
+
+ case DTK_YEAR:
+ result = int64_to_numeric(tm->tm_year);
+ break;
+
+ case DTK_DECADE:
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(tm->tm_year / 10);
+ break;
+
+ case DTK_CENTURY:
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(tm->tm_year / 100);
+ break;
+
+ case DTK_MILLENNIUM:
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(tm->tm_year / 1000);
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("interval units \"%s\" not supported",
+ lowunits)));
+ result = NULL;
+ }
+
+ }
+ else
+ {
+ elog(ERROR, "could not convert interval to tm");
+ result = NULL;
+ }
+ }
+ else if (type == RESERV && val == DTK_EPOCH)
+ {
+ /*
+ * DAYS_PER_YEAR is fractional (365.25), so casting it to int64 before
+ * multiplying would silently drop the quarter day.  Scale the day
+ * counts by 4 (DAYS_PER_YEAR is a multiple of 0.25) and divide
+ * SECS_PER_DAY by 4 to keep the arithmetic exact in integers.
+ */
+ result =
+ numeric_add_opt_error(
+ numeric_div_opt_error(int64_to_numeric(interval->time),
+ int64_to_numeric(1000000),
+ NULL),
+ int64_to_numeric(((int64) (4 * DAYS_PER_YEAR) * (interval->month / MONTHS_PER_YEAR) +
+ (int64) (4 * DAYS_PER_MONTH) * (interval->month % MONTHS_PER_YEAR) +
+ (int64) 4 * interval->day) * (SECS_PER_DAY / 4)),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("interval units \"%s\" not recognized",
+ lowunits)));
+ result = NULL;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
+
/* timestamp_zone()
* Encode timestamp type with specified time zone.
* This function is just timestamp2timestamptz() except instead of
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index e6c7b070f6..81e6dcc312 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -2333,9 +2333,15 @@
{ oid => '1171', descr => 'extract field from timestamp with time zone',
proname => 'date_part', provolatile => 's', prorettype => 'float8',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_part' },
+{ oid => '9983', descr => 'extract field from timestamp with time zone',
+ proname => 'extract', provolatile => 's', prorettype => 'numeric',
+ proargtypes => 'text timestamptz', prosrc => 'extract_timestamptz' },
{ oid => '1172', descr => 'extract field from interval',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text interval', prosrc => 'interval_part' },
+{ oid => '9984', descr => 'extract field from interval',
+ proname => 'extract', prorettype => 'numeric',
+ proargtypes => 'text interval', prosrc => 'extract_interval' },
{ oid => '1174', descr => 'convert date to timestamp with time zone',
proname => 'timestamptz', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'date', prosrc => 'date_timestamptz' },
@@ -2483,6 +2489,9 @@
{ oid => '1273', descr => 'extract field from time with time zone',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text timetz',
prosrc => 'timetz_part' },
+{ oid => '9981', descr => 'extract field from time with time zone',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text timetz',
+ prosrc => 'extract_timetz' },
{ oid => '1274',
proname => 'int84pl', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int84pl' },
@@ -2828,9 +2837,15 @@
proname => 'date_part', prolang => 'sql', prorettype => 'float8',
proargtypes => 'text date',
prosrc => 'select pg_catalog.date_part($1, cast($2 as timestamp without time zone))' },
+{ oid => '9979', descr => 'extract field from date',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text date',
+ prosrc => 'extract_date' },
{ oid => '1385', descr => 'extract field from time',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text time',
prosrc => 'time_part' },
+{ oid => '9980', descr => 'extract field from time',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text time',
+ prosrc => 'extract_time' },
{ oid => '1386',
descr => 'date difference from today preserving months and years',
proname => 'age', prolang => 'sql', provolatile => 's',
@@ -5758,6 +5773,9 @@
{ oid => '2021', descr => 'extract field from timestamp',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text timestamp', prosrc => 'timestamp_part' },
+{ oid => '9982', descr => 'extract field from timestamp',
+ proname => 'extract', prorettype => 'numeric',
+ proargtypes => 'text timestamp', prosrc => 'extract_timestamp' },
{ oid => '2024', descr => 'convert date to timestamp',
proname => 'timestamp', prorettype => 'timestamp', proargtypes => 'date',
prosrc => 'date_timestamp' },
diff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out
index b234d2d4f9..eb11af99a7 100644
--- a/src/test/regress/expected/create_view.out
+++ b/src/test/regress/expected/create_view.out
@@ -1739,7 +1739,7 @@ select
select pg_get_viewdef('tt201v', true);
pg_get_viewdef
-----------------------------------------------------------------------------------------------
- SELECT date_part('day'::text, now()) AS extr, +
+ SELECT EXTRACT(day FROM now()) AS extr, +
((now(), '@ 1 day'::interval) OVERLAPS (CURRENT_TIMESTAMP(2), '@ 1 day'::interval)) AS o,+
(('foo'::text) IS NORMALIZED) AS isn, +
(('foo'::text) IS NFKC NORMALIZED) AS isnn, +
diff --git a/src/test/regress/expected/date.out b/src/test/regress/expected/date.out
index 1b921ce215..d7677fc0b5 100644
--- a/src/test/regress/expected/date.out
+++ b/src/test/regress/expected/date.out
@@ -966,78 +966,78 @@ SELECT f1 as "date",
-- epoch
--
SELECT EXTRACT(EPOCH FROM DATE '1970-01-01'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMP '1970-01-01'); -- 0
- date_part
------------
- 0
+ extract
+----------
+ 0.000000
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00'); -- 0
- date_part
------------
- 0
+ extract
+----------
+ 0.000000
(1 row)
--
-- century
--
SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
- date_part
------------
- -2
+ extract
+---------
+ -2
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-01-01'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '1900-12-31'); -- 19
- date_part
------------
- 19
+ extract
+---------
+ 19
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '1901-01-01'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2000-12-31'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2001-01-01'); -- 21
- date_part
------------
- 21
+ extract
+---------
+ 21
(1 row)
SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True; -- true
@@ -1050,91 +1050,91 @@ SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True; -- true
-- millennium
--
SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01'); -- 2
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31'); -- 2
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01'); -- 3
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
-- next test to be fixed on the turn of the next millennium;-)
SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE); -- 3
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
--
-- decade
--
SELECT EXTRACT(DECADE FROM DATE '1994-12-25'); -- 199
- date_part
------------
- 199
+ extract
+---------
+ 199
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0010-01-01'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0009-12-31'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); -- -2
- date_part
------------
- -2
+ extract
+---------
+ -2
(1 row)
--
@@ -1148,135 +1148,112 @@ SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True; -- true
(1 row)
SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
--
-- all possible fields
--
SELECT EXTRACT(MICROSECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "microseconds" not supported
SELECT EXTRACT(MILLISECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "milliseconds" not supported
SELECT EXTRACT(SECOND FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "second" not supported
SELECT EXTRACT(MINUTE FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "minute" not supported
SELECT EXTRACT(HOUR FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "hour" not supported
SELECT EXTRACT(DAY FROM DATE '2020-08-11');
- date_part
------------
- 11
+ extract
+---------
+ 11
(1 row)
SELECT EXTRACT(MONTH FROM DATE '2020-08-11');
- date_part
------------
- 8
+ extract
+---------
+ 8
(1 row)
SELECT EXTRACT(YEAR FROM DATE '2020-08-11');
- date_part
------------
- 2020
+ extract
+---------
+ 2020
(1 row)
SELECT EXTRACT(DECADE FROM DATE '2020-08-11');
- date_part
------------
- 202
+ extract
+---------
+ 202
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2020-08-11');
- date_part
------------
- 21
+ extract
+---------
+ 21
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2020-08-11');
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11');
- date_part
------------
- 2020
+ extract
+---------
+ 2020
(1 row)
SELECT EXTRACT(QUARTER FROM DATE '2020-08-11');
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
SELECT EXTRACT(WEEK FROM DATE '2020-08-11');
- date_part
------------
- 33
+ extract
+---------
+ 33
(1 row)
SELECT EXTRACT(DOW FROM DATE '2020-08-11');
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(ISODOW FROM DATE '2020-08-11');
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(DOY FROM DATE '2020-08-11');
- date_part
------------
- 224
+ extract
+---------
+ 224
(1 row)
SELECT EXTRACT(TIMEZONE FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone" not supported
SELECT EXTRACT(TIMEZONE_M FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_m" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_m" not supported
SELECT EXTRACT(TIMEZONE_H FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_h" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_h" not supported
SELECT EXTRACT(EPOCH FROM DATE '2020-08-11');
- date_part
+ extract
------------
1597104000
(1 row)
SELECT EXTRACT(JULIAN FROM DATE '2020-08-11');
- date_part
------------
- 2459073
+ extract
+---------
+ 2459073
(1 row)
--
@@ -1372,221 +1349,188 @@ select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'
--
-- oscillating fields from non-finite date/timestamptz:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
+ extract
+---------
+
(1 row)
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
- date_part
------------
-
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMP 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMP '-infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
-- all possible fields
SELECT EXTRACT(MICROSECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "microseconds" not supported
SELECT EXTRACT(MILLISECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "milliseconds" not supported
SELECT EXTRACT(SECOND FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "second" not supported
SELECT EXTRACT(MINUTE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "minute" not supported
SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "hour" not supported
SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(MONTH FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(QUARTER FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(WEEK FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(DOW FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(ISODOW FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(DOY FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(TIMEZONE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone" not supported
SELECT EXTRACT(TIMEZONE_M FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone_m" not supported
SELECT EXTRACT(TIMEZONE_H FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone_h" not supported
--
-- monotonic fields from non-finite date/timestamptz:
--
SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM DATE '-infinity'); -- -Infinity
- date_part
+ extract
-----------
-Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMP 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMP '-infinity'); -- -Infinity
- date_part
+ extract
-----------
-Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
- date_part
+ extract
-----------
-Infinity
(1 row)
-- all possible fields
SELECT EXTRACT(YEAR FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(DECADE FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(CENTURY FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(JULIAN FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(ISOYEAR FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
-ERROR: timestamp units "microsec" not recognized
-CONTEXT: SQL function "date_part" statement 1
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- error
+ERROR: date units "microsec" not recognized
-- test constructors
select make_date(2013, 7, 15);
make_date
diff --git a/src/test/regress/expected/interval.out b/src/test/regress/expected/interval.out
index fde4be5271..5f864df194 100644
--- a/src/test/regress/expected/interval.out
+++ b/src/test/regress/expected/interval.out
@@ -948,18 +948,18 @@ SELECT f1,
EXTRACT(MILLENNIUM FROM f1) AS MILLENNIUM,
EXTRACT(EPOCH FROM f1) AS EPOCH
FROM INTERVAL_TBL;
- f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
--------------------------------+-------------+-------------+--------+--------+------+-----+-------+---------+------+--------+---------+------------+------------
- @ 1 min | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60
- @ 5 hours | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000
- @ 10 days | 0 | 0 | 0 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000
- @ 34 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072958400
- @ 3 mons | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000
- @ 14 secs ago | -14000000 | -14000 | -14 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14
- @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000 | 4 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784
- @ 6 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189345600
- @ 5 mons | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000
- @ 5 mons 12 hours | 0 | 0 | 0 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200
+ f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
+-------------------------------+-------------+-------------+------------+--------+------+-----+-------+---------+------+--------+---------+------------+-------------------
+ @ 1 min | 0 | 0.000 | 0.000000 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60.000000
+ @ 5 hours | 0 | 0.000 | 0.000000 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000.000000
+ @ 10 days | 0 | 0.000 | 0.000000 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000.000000
+ @ 34 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072224000.000000
+ @ 3 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000.000000
+ @ 14 secs ago | -14000000 | -14000.000 | -14.000000 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14.000000
+ @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000.000 | 4.000000 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784.000000
+ @ 6 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189216000.000000
+ @ 5 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000.000000
+ @ 5 mons 12 hours | 0 | 0.000 | 0.000000 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200.000000
(10 rows)
SELECT EXTRACT(FORTNIGHT FROM INTERVAL '2 days'); -- error
@@ -967,50 +967,50 @@ ERROR: interval units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM INTERVAL '2 days'); -- error
ERROR: interval units "timezone" not supported
SELECT EXTRACT(DECADE FROM INTERVAL '100 y');
- date_part
------------
- 10
+ extract
+---------
+ 10
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '99 y');
- date_part
------------
- 9
+ extract
+---------
+ 9
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '-99 y');
- date_part
------------
- -9
+ extract
+---------
+ -9
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '-100 y');
- date_part
------------
- -10
+ extract
+---------
+ -10
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y');
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
diff --git a/src/test/regress/expected/psql_crosstab.out b/src/test/regress/expected/psql_crosstab.out
index eae6fbd051..e09e331016 100644
--- a/src/test/regress/expected/psql_crosstab.out
+++ b/src/test/regress/expected/psql_crosstab.out
@@ -17,12 +17,12 @@ SELECT v, EXTRACT(year FROM d), count(*)
FROM ctv_data
GROUP BY 1, 2
ORDER BY 1, 2;
- v | date_part | count
-----+-----------+-------
- v0 | 2014 | 2
- v0 | 2015 | 1
- v1 | 2015 | 3
- v2 | 2015 | 1
+ v | extract | count
+----+---------+-------
+ v0 | 2014 | 2
+ v0 | 2015 | 1
+ v1 | 2015 | 3
+ v2 | 2015 | 1
(4 rows)
-- basic usage with 3 columns
diff --git a/src/test/regress/expected/time.out b/src/test/regress/expected/time.out
index 5303cc0c94..f9786e0b4e 100644
--- a/src/test/regress/expected/time.out
+++ b/src/test/regress/expected/time.out
@@ -131,33 +131,33 @@ HINT: Could not choose a best candidate operator. You might need to add explici
-- test EXTRACT
--
SELECT EXTRACT(MICROSECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 25575401
+ extract
+----------
+ 25575401
(1 row)
SELECT EXTRACT(MILLISECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
-----------
25575.401
(1 row)
SELECT EXTRACT(SECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
-----------
25.575401
(1 row)
SELECT EXTRACT(MINUTE FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 30
+ extract
+---------
+ 30
(1 row)
SELECT EXTRACT(HOUR FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 13
+ extract
+---------
+ 13
(1 row)
SELECT EXTRACT(DAY FROM TIME '2020-05-26 13:30:25.575401'); -- error
@@ -167,7 +167,7 @@ ERROR: "time" units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM TIME '2020-05-26 13:30:25.575401'); -- error
ERROR: "time" units "timezone" not recognized
SELECT EXTRACT(EPOCH FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
--------------
48625.575401
(1 row)
diff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out
index 1ab5ed5105..f4a0a760f7 100644
--- a/src/test/regress/expected/timetz.out
+++ b/src/test/regress/expected/timetz.out
@@ -148,33 +148,33 @@ HINT: No operator matches the given name and argument types. You might need to
-- test EXTRACT
--
SELECT EXTRACT(MICROSECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 25575401
+ extract
+----------
+ 25575401
(1 row)
SELECT EXTRACT(MILLISECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
-----------
25575.401
(1 row)
SELECT EXTRACT(SECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
-----------
25.575401
(1 row)
SELECT EXTRACT(MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 30
+ extract
+---------
+ 30
(1 row)
SELECT EXTRACT(HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 13
+ extract
+---------
+ 13
(1 row)
SELECT EXTRACT(DAY FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
@@ -182,25 +182,25 @@ ERROR: "time with time zone" units "day" not recognized
SELECT EXTRACT(FORTNIGHT FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
ERROR: "time with time zone" units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- -14400
+ extract
+---------
+ -14400
(1 row)
SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- -4
+ extract
+---------
+ -4
(1 row)
SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(EPOCH FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
--------------
63025.575401
(1 row)
diff --git a/src/test/regress/sql/date.sql b/src/test/regress/sql/date.sql
index 7a734fb1a0..eb0a9d8ddd 100644
--- a/src/test/regress/sql/date.sql
+++ b/src/test/regress/sql/date.sql
@@ -329,8 +329,8 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- oscillating fields from non-finite date/timestamptz:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMP 'infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMP '-infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity'); -- NULL
@@ -371,7 +371,7 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- error
-- test constructors
select make_date(2013, 7, 15);
base-commit: a58db3aa10e62e4228aa409ba006014fa07a8ca2
--
2.29.2
On 12/15/20 9:03 AM, Peter Eisentraut wrote:
Here is a new patch for this. This now follows the implementation that
Tom has suggested: Leave date_part() alone, add a new set of extract()
functions, and map the SQL EXTRACT construct to those. I have basically
just copied over the implementations from my previous patch and placed
them next to the existing date_part() implementations. So all the
behavior is still the same as in the previous patches.
One thing I still need to look into is how to not lose all the test
coverage for date_part(). But that should be fairly mechanical, so I'm
leaving it off in this version.
Tom, what do you think of the updated patch?
Regards,
--
-David
david@pgmasters.net
David Steele <david@pgmasters.net> writes:
On 12/15/20 9:03 AM, Peter Eisentraut wrote:
Here is a new patch for this. This now follows the implementation that
Tom has suggested: Leave date_part() alone, add a new set of extract()
functions, and map the SQL EXTRACT construct to those. I have basically
just copied over the implementations from my previous patch and placed
them next to the existing date_part() implementations. So all the
behavior is still the same as in the previous patches.
One thing I still need to look into is how to not lose all the test
coverage for date_part(). But that should be fairly mechanical, so I'm
leaving it off in this version.
Tom, what do you think of the updated patch?
Oh, I didn't think I was on the hook to review this ;-)
Anyway, taking a quick look at the v4 patch, the only complaint
I have is that it seems a bit bulky and brute-force to duplicate
so much code. Is it feasible to share most of the implementation
between old and new functions, returning (say) an int64 that can
then be converted to either numeric or float8 by a wrapper? That
would also reduce the pressure to duplicate all the test cases.
(I don't intend this complaint as a deal-breaker; Peter may well
have considered this alternative already and rejected it for good
reasons.)
regards, tom lane
On 15.03.21 18:35, Tom Lane wrote:
Anyway, taking a quick look at the v4 patch, the only complaint
I have is that it seems a bit bulky and brute-force to duplicate
so much code. Is it feasible to share most of the implementation
between old and new functions, returning (say) an int64 that can
then be converted to either numeric or float8 by a wrapper? That
would also reduce the pressure to duplicate all the test cases.
Yeah, it's not straightforward to do this, because you'd also need to
carry around scale and infinity information, so you might end up
creating a mini-numeric implementation just for this.
An easy way to reduce duplication would be to convert the existing
date_part() into a wrapper around the new extract(), with a cast. But
then you'd pay the performance penalty of the numeric version.
Which leads me to: After retesting this now, with a new machine, the
performance of the numeric implementation is brutal compared to the
float implementation, for cases where we need numeric division, which is
milliseconds, seconds, and epoch. In the first two cases, I imagine we
could rewrite this a bit to avoid a lot of the numeric work, but for the
epoch case (which is what started this thread), there isn't enough space
in int64 to make this work. Perhaps int128 could be pressed into
service, optionally. I think it would also help if we cracked open the
numeric APIs a bit to avoid all the repeated unpacking and packing for
each step.
So I think we need to do a bit more thinking and work here, meaning it
will have to be postponed.
Here are the kinds of tests I ran:
=> select date_part('epoch', localtime + generate_series(0, 10000000) *
interval '1 second') \g /dev/null
Time: 2537.482 ms (00:02.537)
=> select extract(epoch from localtime + generate_series(0, 10000000) *
interval '1 second') \g /dev/null
Time: 6106.586 ms (00:06.107)
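For reference, the precision loss that started the thread (the epoch case mentioned above) can be sketched outside the server. This is a hedged Python illustration, not PostgreSQL code: it shows that the float8 value itself round-trips, and it is the 15-significant-digit output format (the pre-Ryu default) that drops the last microsecond digit.

```python
# Epoch 1000000000.000021 seconds: the nearest IEEE double is close
# enough to round-trip, but printing at 15 significant digits (the
# old extra_float_digits = 0 behavior) loses the trailing '1'.
v = 1000000000.000021

print("%.15g" % v)  # 15 digits -> 1000000000.00002 (microsecond lost)
print(repr(v))      # shortest round-trip, like Ryu -> 1000000000.000021
```

This matches the observation earlier in the thread that v12 and later print the full value thanks to the Ryu float output code, while older branches need extra_float_digits to be raised.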
On 18.03.21 09:28, Peter Eisentraut wrote:
Which leads me to: After retesting this now, with a new machine, the
performance of the numeric implementation is brutal compared to the
float implementation, for cases where we need numeric division, which is
milliseconds, seconds, and epoch. In the first two cases, I imagine we
could rewrite this a bit to avoid a lot of the numeric work, but for the
epoch case (which is what started this thread), there isn't enough space
in int64 to make this work. Perhaps int128 could be pressed into
service, optionally. I think it would also help if we cracked open the
numeric APIs a bit to avoid all the repeated unpacking and packing for
each step.
So I think we need to do a bit more thinking and work here, meaning it
will have to be postponed.
Well, I had an idea that I put to work. In most of these cases where we
need division, we divide an integer by a power of 10. That can be done
with numeric very quickly by just shifting the weight and scale around.
So I wrote a function that does that specifically (look for
int64_div_fast_to_numeric()). With that, the slow cases I mentioned now
have the same performance as the other cases that didn't have any
numeric division. You just get the overhead for constructing and
passing around a numeric instead of a double, which can't be avoided.
So here is an intermediate patch that does this. I haven't gotten rid
of all numeric_div_opt_error() calls yet, but if this seems acceptable,
I can work on the remaining ones.
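The trick behind int64_div_fast_to_numeric() can be modeled in a few lines. This is a hedged Python sketch of the idea only (the function name and decimal-scaling shortcut stand in for the C code's NumericVar weight/dscale adjustment), not the actual implementation:

```python
from decimal import Decimal

def div_by_power_of_ten(val: int, divisor: int) -> Decimal:
    """Exact val/divisor where divisor is a power of 10: instead of
    performing numeric long division, shift the decimal point left by
    the number of zeros in the divisor, preserving the full scale."""
    scale = len(str(divisor)) - 1
    assert divisor == 10 ** scale, "divisor must be a power of 10"
    return Decimal(val).scaleb(-scale)  # multiply by 10**(-scale)

# e.g. 25575401 microseconds -> seconds, keeping six fractional digits
print(div_by_power_of_ten(25575401, 1000000))  # 25.575401
```

Because no division is actually carried out, the cost is independent of the magnitude of the dividend, which is why the slow epoch/second/millisecond cases end up as fast as the fields that never needed division.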
Attachments:
v5-0001-Change-return-type-of-EXTRACT-to-numeric.patch (text/plain)
From 801460c46ddf7273d2c49a9af3460f2f5614599d Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Fri, 19 Mar 2021 20:27:15 +0100
Subject: [PATCH v5] Change return type of EXTRACT to numeric
The previous implementation of EXTRACT mapped internally to
date_part(), which returned type double precision (since it was
implemented long before the numeric type existed). This can lead to
imprecise output in some cases, so returning numeric would be
preferable. Changing the return type of an existing function is a
bit risky, so instead we do the following: We implement a new set of
functions, which are now called "extract", in parallel to the existing
date_part functions. They work the same way internally but use
numeric instead of float8. The EXTRACT construct is now mapped by the
parser to these new extract functions. That way, dumps of views
etc. from old versions (which would use date_part) continue to work
unchanged, but new uses will map to the new extract functions.
Additionally, the reverse compilation of EXTRACT now reproduces the
original syntax, using the new mechanism introduced in
40c24bfef92530bd846e111c1742c2a54441c62c.
The following minor changes of behavior result from the new
implementation:
- The column name from an isolated EXTRACT call is now "extract"
instead of "date_part".
- Extract from date now rejects inappropriate field names such as
HOUR. It was previously mapped internally to extract from
timestamp, so it would silently accept everything appropriate for
timestamp.
- Return values when extracting fields with possibly fractional
values, such as second and epoch, now have the full scale that the
value has internally (so, for example, '1.000000' instead of just
'1').
Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
---
doc/src/sgml/func.sgml | 10 +-
src/backend/parser/gram.y | 2 +-
src/backend/utils/adt/date.c | 370 +++++++++++
src/backend/utils/adt/numeric.c | 56 ++
src/backend/utils/adt/ruleutils.c | 21 +
src/backend/utils/adt/timestamp.c | 670 +++++++++++++++++++-
src/include/catalog/pg_proc.dat | 18 +
src/include/utils/numeric.h | 1 +
src/test/regress/expected/create_view.out | 2 +-
src/test/regress/expected/date.out | 482 +++++++-------
src/test/regress/expected/interval.out | 72 +--
src/test/regress/expected/psql_crosstab.out | 12 +-
src/test/regress/expected/time.out | 24 +-
src/test/regress/expected/timetz.out | 46 +-
src/test/regress/sql/date.sql | 6 +-
src/test/regress/sql/timetz.sql | 4 +-
16 files changed, 1431 insertions(+), 365 deletions(-)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 9492a3c6b9..0383cf3db9 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -8776,7 +8776,7 @@ <title>Date/Time Functions</title>
<primary>extract</primary>
</indexterm>
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>timestamp</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get timestamp subfield; see <xref linkend="functions-datetime-extract"/>
@@ -8790,7 +8790,7 @@ <title>Date/Time Functions</title>
<row>
<entry role="func_table_entry"><para role="func_signature">
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>interval</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get interval subfield; see <xref linkend="functions-datetime-extract"/>
@@ -9305,7 +9305,7 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
well.) <replaceable>field</replaceable> is an identifier or
string that selects what field to extract from the source value.
The <function>extract</function> function returns values of type
- <type>double precision</type>.
+ <type>numeric</type>.
The following are valid field names:
<!-- alphabetical -->
@@ -9729,6 +9729,10 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
be a string value, not a name. The valid field names for
<function>date_part</function> are the same as for
<function>extract</function>.
+ For historical reasons, the <function>date_part</function> function
+ returns values of type <type>double precision</type>. This can result in
+ a loss of precision in certain uses. Using <function>extract</function>
+ is recommended instead.
</para>
<screen>
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index fd07e7107d..4a23593613 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -13910,7 +13910,7 @@ func_expr_common_subexpr:
{ $$ = makeTypeCast($3, $5, @1); }
| EXTRACT '(' extract_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("date_part"),
+ $$ = (Node *) makeFuncCall(SystemFuncName("extract"),
$3,
COERCE_SQL_SYNTAX,
@1);
diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c
index 68d99a5099..9da03b0ef6 100644
--- a/src/backend/utils/adt/date.c
+++ b/src/backend/utils/adt/date.c
@@ -31,6 +31,7 @@
#include "utils/builtins.h"
#include "utils/date.h"
#include "utils/datetime.h"
+#include "utils/numeric.h"
#include "utils/sortsupport.h"
/*
@@ -1063,6 +1064,180 @@ in_range_date_interval(PG_FUNCTION_ARGS)
}
+/* extract_date()
+ * Extract specified field from date type.
+ */
+Datum
+extract_date(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ DateADT date = PG_GETARG_DATEADT(1);
+ int64 result;
+ int type,
+ val;
+ char *lowunits;
+ int year, mon, mday;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (DATE_NOT_FINITE(date) && (type == UNITS || type == RESERV))
+ {
+ switch (val)
+ {
+ /* Oscillating units */
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_WEEK:
+ case DTK_DOW:
+ case DTK_ISODOW:
+ case DTK_DOY:
+ PG_RETURN_NULL();
+ break;
+
+ /* Monotonically-increasing units */
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_JULIAN:
+ case DTK_ISOYEAR:
+ case DTK_EPOCH:
+ if (DATE_IS_NOBEGIN(date))
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ else
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ }
+ }
+ else if (type == UNITS)
+ {
+ j2date(date + POSTGRES_EPOCH_JDATE, &year, &mon, &mday);
+
+ switch (val)
+ {
+ case DTK_DAY:
+ result = mday;
+ break;
+
+ case DTK_MONTH:
+ result = mon;
+ break;
+
+ case DTK_QUARTER:
+ result = (mon - 1) / 3 + 1;
+ break;
+
+ case DTK_WEEK:
+ result = date2isoweek(year, mon, mday);
+ break;
+
+ case DTK_YEAR:
+ if (year > 0)
+ result = year;
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ result = year - 1;
+ break;
+
+ case DTK_DECADE:
+ /* see comments in timestamp_part */
+ if (year >= 0)
+ result = year / 10;
+ else
+ result = -((8 - (year - 1)) / 10);
+ break;
+
+ case DTK_CENTURY:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ result = (year + 99) / 100;
+ else
+ result = -((99 - (year - 1)) / 100);
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ result = (year + 999) / 1000;
+ else
+ result = -((999 - (year - 1)) / 1000);
+ break;
+
+ case DTK_JULIAN:
+ result = date + POSTGRES_EPOCH_JDATE;
+ break;
+
+ case DTK_ISOYEAR:
+ result = date2isoyear(year, mon, mday);
+ /* Adjust BC years */
+ if (result <= 0)
+ result -= 1;
+ break;
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ result = j2day(date + POSTGRES_EPOCH_JDATE);
+ if (val == DTK_ISODOW && result == 0)
+ result = 7;
+ break;
+
+ case DTK_DOY:
+ result = date2j(year, mon, mday) - date2j(year, 1, 1) + 1;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ result = ((int64) date + POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("date units \"%s\" not recognized", lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(int64_to_numeric(result));
+}
+
+
/* Add an interval to a date, giving a new date.
* Must handle both positive and negative intervals.
*
@@ -2036,6 +2211,97 @@ time_part(PG_FUNCTION_ARGS)
}
+/* extract_time()
+ * Extract specified field from time type.
+ */
+Datum
+extract_time(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ TimeADT time = PG_GETARG_TIMEADT(1);
+ Numeric result;
+ int type,
+ val;
+ char *lowunits;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (type == UNITS)
+ {
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ time2tm(time, tm, &fsec);
+
+ switch (val)
+ {
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000);
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1'000'000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000000);
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_TZ:
+ case DTK_TZ_MINUTE:
+ case DTK_TZ_HOUR:
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_ISOYEAR:
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV && val == DTK_EPOCH)
+ {
+ result = int64_div_fast_to_numeric(time, 1000000);
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
+
/*****************************************************************************
* Time With Time Zone ADT
*****************************************************************************/
@@ -2785,6 +3051,110 @@ timetz_part(PG_FUNCTION_ARGS)
PG_RETURN_FLOAT8(result);
}
+
+/* timetz_part()
+ * Extract specified field from time type.
+ */
+Datum
+extract_timetz(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ TimeTzADT *time = PG_GETARG_TIMETZADT_P(1);
+ Numeric result;
+ int type,
+ val;
+ char *lowunits;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (type == UNITS)
+ {
+ int tz;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ timetz2tm(time, tm, &fsec, &tz);
+
+ switch (val)
+ {
+ case DTK_TZ:
+ result = int64_to_numeric(-tz);
+ break;
+
+ case DTK_TZ_MINUTE:
+ /* trunc(-tz / 60) % 60 */
+ result = int64_to_numeric(- (tz - tz / SECS_PER_HOUR * SECS_PER_HOUR) / SECS_PER_MINUTE);
+ break;
+
+ case DTK_TZ_HOUR:
+ result = int64_to_numeric(- tz / SECS_PER_HOUR);
+ break;
+
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000);
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1'000'000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000000);
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time with time zone\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV && val == DTK_EPOCH)
+ {
+ /* time->time / 1000000.0 + time->zone
+ = (time->time + time->zone * 1000000) / 1000000.0 */
+ result = int64_div_fast_to_numeric(time->time + time->zone * 1000000LL, 1000000);
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("\"time with time zone\" units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
+
/* timetz_zone()
* Encode time with time zone type with specified time zone.
* Applies DST rules as of the current date.
diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index 682200f636..055b02efaf 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -4092,6 +4092,62 @@ int64_to_numeric(int64 val)
return res;
}
+/*
+ * Convert val1/val2 to numeric. val2 must be a power of 10. This is much
+ * faster than normal numeric division.
+ */
+Numeric
+int64_div_fast_to_numeric(int64 val1, int64 val2)
+{
+ Numeric res;
+ NumericVar result;
+ int64 x;
+ int n;
+ int s;
+
+ x = val2;
+ n = 0;
+ s = 0;
+
+ /* count how much to decrease the weight by */
+ while (x >= NBASE)
+ {
+ x /= NBASE;
+ n++;
+ }
+
+ /*
+ * If there is anything left, multiply the dividend by what's left, then
+ * shift the weight by one more. Also remember by how much we multiplied
+ * so we can adjust the scale below.
+ */
+ if (x > 1)
+ {
+ if (unlikely(pg_mul_s64_overflow(val1, NBASE/x, &val1)))
+ elog(ERROR, "overflow");
+ n++;
+ while (x > 1)
+ {
+ Assert((x / 10) * 10 == x); /* must be power of 10 */
+ x /= 10;
+ s++;
+ }
+ }
+
+ init_var(&result);
+
+ int64_to_numericvar(val1, &result);
+
+ result.weight -= n;
+ result.dscale += n * DEC_DIGITS - (DEC_DIGITS - s);
+
+ res = make_result(&result);
+
+ free_var(&result);
+
+ return res;
+}
+
Datum
int4_numeric(PG_FUNCTION_ARGS)
{
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index f0de2a25c9..8a6e265fc9 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -9617,6 +9617,27 @@ get_func_sql_syntax(FuncExpr *expr, deparse_context *context)
appendStringInfoString(buf, "))");
return true;
+ case F_EXTRACT_TEXT_DATE:
+ case F_EXTRACT_TEXT_TIME:
+ case F_EXTRACT_TEXT_TIMETZ:
+ case F_EXTRACT_TEXT_TIMESTAMP:
+ case F_EXTRACT_TEXT_TIMESTAMPTZ:
+ case F_EXTRACT_TEXT_INTERVAL:
+ /* EXTRACT (x FROM y) */
+ appendStringInfoString(buf, "EXTRACT(");
+ {
+ Const *con = (Const *) linitial(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfoString(buf, TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
case F_IS_NORMALIZED:
/* IS xxx NORMALIZED */
appendStringInfoString(buf, "((");
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 0b1f95a5b4..d73b6353f9 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -35,6 +35,7 @@
#include "utils/date.h"
#include "utils/datetime.h"
#include "utils/float.h"
+#include "utils/numeric.h"
/*
* gcc's -ffast-math switch breaks routines that expect exact results from
@@ -4447,7 +4448,7 @@ date2isoyearday(int year, int mon, int mday)
}
/*
- * NonFiniteTimestampTzPart
+ * NonFiniteTimestampTzPart_float8
*
* Used by timestamp_part and timestamptz_part when extracting from infinite
* timestamp[tz]. Returns +/-Infinity if that is the appropriate result,
@@ -4458,8 +4459,8 @@ date2isoyearday(int year, int mon, int mday)
* discrepancies between finite- and infinite-input cases.
*/
static float8
-NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
- bool isNegative, bool isTz)
+NonFiniteTimestampTzPart_float8(int type, int unit, char *lowunits,
+ bool isNegative, bool isTz)
{
if ((type != UNITS) && (type != RESERV))
{
@@ -4523,6 +4524,89 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
}
}
+/*
+ * NonFiniteTimestampTzPart_numeric
+ *
+ * Used by extract_timestamp and extract_timestamptz when extracting from infinite
+ * timestamp[tz]. Returns +/-Infinity if that is the appropriate result,
+ * otherwise returns NULL (which should be taken as meaning to return SQL NULL).
+ *
+ * Errors thrown here for invalid units should exactly match those that
+ * would be thrown in the calling functions, else there will be unexpected
+ * discrepancies between finite- and infinite-input cases.
+ */
+static Numeric
+NonFiniteTimestampTzPart_numeric(int type, int unit, char *lowunits,
+ bool isNegative, bool isTz)
+{
+ if ((type != UNITS) && (type != RESERV))
+ {
+ if (isTz)
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp with time zone units \"%s\" not recognized",
+ lowunits)));
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp units \"%s\" not recognized",
+ lowunits)));
+ }
+
+ switch (unit)
+ {
+ /* Oscillating units */
+ case DTK_MICROSEC:
+ case DTK_MILLISEC:
+ case DTK_SECOND:
+ case DTK_MINUTE:
+ case DTK_HOUR:
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_WEEK:
+ case DTK_DOW:
+ case DTK_ISODOW:
+ case DTK_DOY:
+ case DTK_TZ:
+ case DTK_TZ_MINUTE:
+ case DTK_TZ_HOUR:
+ return NULL;
+
+ /* Monotonically-increasing units */
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_JULIAN:
+ case DTK_ISOYEAR:
+ case DTK_EPOCH:
+ if (isNegative)
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
+ else
+ return DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1)));
+
+ default:
+ if (isTz)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp with time zone units \"%s\" not supported",
+ lowunits)));
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp units \"%s\" not supported",
+ lowunits)));
+ return NULL; /* keep compiler quiet */
+ }
+}
+
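The split that `NonFiniteTimestampTzPart_numeric` encodes — monotonically-increasing fields of an infinite timestamp go to +/-Infinity, cyclic fields go to SQL NULL — can be sketched in Python (the function and set names are illustrative, not part of the patch):

```python
MONOTONIC = {"year", "decade", "century", "millennium", "julian", "isoyear", "epoch"}
OSCILLATING = {"microseconds", "milliseconds", "second", "minute", "hour", "day",
               "month", "quarter", "week", "dow", "isodow", "doy",
               "timezone", "timezone_minute", "timezone_hour"}

def nonfinite_extract(unit, is_negative):
    # fields that grow without bound map to +/-Infinity;
    # fields that cycle have no meaningful value, so map to NULL (None)
    if unit in MONOTONIC:
        return float("-inf") if is_negative else float("inf")
    if unit in OSCILLATING:
        return None
    raise ValueError(f'units "{unit}" not recognized')

print(nonfinite_extract("epoch", False))  # inf
print(nonfinite_extract("hour", False))   # None
```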
/* timestamp_part()
* Extract specified field from timestamp.
*/
@@ -4550,9 +4634,9 @@ timestamp_part(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(timestamp))
{
- result = NonFiniteTimestampTzPart(type, val, lowunits,
- TIMESTAMP_IS_NOBEGIN(timestamp),
- false);
+ result = NonFiniteTimestampTzPart_float8(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ false);
if (result)
PG_RETURN_FLOAT8(result);
else
@@ -4717,6 +4801,225 @@ timestamp_part(PG_FUNCTION_ARGS)
PG_RETURN_FLOAT8(result);
}
+/* extract_timestamp()
+ * Extract specified field from timestamp.
+ */
+Datum
+extract_timestamp(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ Timestamp timestamp = PG_GETARG_TIMESTAMP(1);
+ Numeric result;
+ Timestamp epoch;
+ int type,
+ val;
+ char *lowunits;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (TIMESTAMP_NOT_FINITE(timestamp))
+ {
+ result = NonFiniteTimestampTzPart_numeric(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ false);
+ if (result)
+ PG_RETURN_NUMERIC(result);
+ else
+ PG_RETURN_NULL();
+ }
+
+ if (type == UNITS)
+ {
+ if (timestamp2tm(timestamp, NULL, tm, &fsec, NULL, NULL) != 0)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
+ errmsg("timestamp out of range")));
+
+ switch (val)
+ {
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000);
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1'000'000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000000);
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ result = int64_to_numeric(tm->tm_mday);
+ break;
+
+ case DTK_MONTH:
+ result = int64_to_numeric(tm->tm_mon);
+ break;
+
+ case DTK_QUARTER:
+ result = int64_to_numeric((tm->tm_mon - 1) / 3 + 1);
+ break;
+
+ case DTK_WEEK:
+ result = int64_to_numeric(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ break;
+
+ case DTK_YEAR:
+ if (tm->tm_year > 0)
+ result = int64_to_numeric(tm->tm_year);
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ result = int64_to_numeric(tm->tm_year - 1);
+ break;
+
+ case DTK_DECADE:
+
+ /*
+ * what is a decade wrt dates? let us assume that decade 199
+ * is 1990 thru 1999... decade 0 starts on year 1 BC, and -1
+ * is 11 BC thru 2 BC...
+ */
+ if (tm->tm_year >= 0)
+ result = int64_to_numeric(tm->tm_year / 10);
+ else
+ result = int64_to_numeric(-((8 - (tm->tm_year - 1)) / 10));
+ break;
+
+ case DTK_CENTURY:
+
+ /* ----
+ * centuries AD, c>0: year in [ (c-1)* 100 + 1 : c*100 ]
+ * centuries BC, c<0: year in [ c*100 : (c+1) * 100 - 1]
+ * there is no number 0 century.
+ * ----
+ */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 99) / 100);
+ else
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(-((99 - (tm->tm_year - 1)) / 100));
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments above. */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 999) / 1000);
+ else
+ result = int64_to_numeric(-((999 - (tm->tm_year - 1)) / 1000));
+ break;
+
+ case DTK_JULIAN:
+ result = numeric_add_opt_error(
+ int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(
+ int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL);
+ break;
+
+ case DTK_ISOYEAR:
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ /* Adjust BC years */
+ if (tmp <= 0)
+ tmp -= 1;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOY:
+ result = int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
+ break;
+
+ case DTK_TZ:
+ case DTK_TZ_MINUTE:
+ case DTK_TZ_HOUR:
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ /* (timestamp - epoch) / 1000000 */
+ epoch = SetEpochTimestamp();
+ /*
+ * epoch is negative, so PG_INT64_MAX + epoch cannot overflow; the
+ * test checks whether timestamp - epoch fits in int64, else fall
+ * back to doing the subtraction in numeric arithmetic.
+ */
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = numeric_div_opt_error(
+ int64_to_numeric(timestamp - epoch),
+ int64_to_numeric(1000000),
+ NULL);
+ else
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp units \"%s\" not recognized", lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
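The point of returning numeric here can be seen with the value from the original report: float8 carries only about 15-16 significant decimal digits, so once the epoch passes ~10^9 seconds (2001-09-09 01:46:40 UTC) microseconds are no longer exactly representable. A Python illustration, values taken from the thread:

```python
from decimal import Decimal

micros = 1_000_000_000_000_021          # 2001-09-09 01:46:40.000021 UTC, in microseconds
as_float = micros / 1_000_000           # float8 path: nearest double, not exact
as_numeric = Decimal(micros).scaleb(-6) # numeric path: exact to the microsecond

print(as_numeric)                       # 1000000000.000021
print(Decimal(as_float) == as_numeric)  # False: the double holds a nearby value
```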
/* timestamptz_part()
* Extract specified field from timestamp with time zone.
*/
@@ -4746,9 +5049,9 @@ timestamptz_part(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(timestamp))
{
- result = NonFiniteTimestampTzPart(type, val, lowunits,
- TIMESTAMP_IS_NOBEGIN(timestamp),
- true);
+ result = NonFiniteTimestampTzPart_float8(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ true);
if (result)
PG_RETURN_FLOAT8(result);
else
@@ -4915,6 +5218,226 @@ timestamptz_part(PG_FUNCTION_ARGS)
PG_RETURN_FLOAT8(result);
}
+/* extract_timestamptz()
+ * Extract specified field from timestamp with time zone.
+ */
+Datum
+extract_timestamptz(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ TimestampTz timestamp = PG_GETARG_TIMESTAMPTZ(1);
+ Numeric result;
+ Timestamp epoch;
+ int tz;
+ int type,
+ val;
+ char *lowunits;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (TIMESTAMP_NOT_FINITE(timestamp))
+ {
+ result = NonFiniteTimestampTzPart_numeric(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ true);
+ if (result)
+ PG_RETURN_NUMERIC(result);
+ else
+ PG_RETURN_NULL();
+ }
+
+ if (type == UNITS)
+ {
+ if (timestamp2tm(timestamp, &tz, tm, &fsec, NULL, NULL) != 0)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
+ errmsg("timestamp out of range")));
+
+ switch (val)
+ {
+ case DTK_TZ:
+ result = int64_to_numeric(-tz);
+ break;
+
+ case DTK_TZ_MINUTE:
+ /* trunc(-tz / 60) % 60 */
+ result = int64_to_numeric(-(tz - tz / SECS_PER_HOUR * SECS_PER_HOUR) / SECS_PER_MINUTE);
+ break;
+
+ case DTK_TZ_HOUR:
+ result = int64_to_numeric(-tz / SECS_PER_HOUR);
+ break;
+
+ case DTK_MICROSEC:
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000);
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1'000'000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000000);
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ result = int64_to_numeric(tm->tm_mday);
+ break;
+
+ case DTK_MONTH:
+ result = int64_to_numeric(tm->tm_mon);
+ break;
+
+ case DTK_QUARTER:
+ result = int64_to_numeric((tm->tm_mon - 1) / 3 + 1);
+ break;
+
+ case DTK_WEEK:
+ result = int64_to_numeric(date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ break;
+
+ case DTK_YEAR:
+ if (tm->tm_year > 0)
+ result = int64_to_numeric(tm->tm_year);
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ result = int64_to_numeric(tm->tm_year - 1);
+ break;
+
+ case DTK_DECADE:
+ /* see comments in timestamp_part */
+ if (tm->tm_year >= 0)
+ result = int64_to_numeric(tm->tm_year / 10);
+ else
+ result = int64_to_numeric(-((8 - (tm->tm_year - 1)) / 10));
+ break;
+
+ case DTK_CENTURY:
+ /* see comments in timestamp_part */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 99) / 100);
+ else
+ result = int64_to_numeric(-((99 - (tm->tm_year - 1)) / 100));
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments in timestamp_part */
+ if (tm->tm_year > 0)
+ result = int64_to_numeric((tm->tm_year + 999) / 1000);
+ else
+ result = int64_to_numeric(-((999 - (tm->tm_year - 1)) / 1000));
+ break;
+
+ case DTK_JULIAN:
+ result = numeric_add_opt_error(
+ int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(
+ int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL);
+ break;
+
+ case DTK_ISOYEAR:
+ {
+ int tmp = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ /* Adjust BC years */
+ if (tmp <= 0)
+ tmp -= 1;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ {
+ int tmp = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && tmp == 0)
+ tmp = 7;
+ result = int64_to_numeric(tmp);
+ break;
+ }
+
+ case DTK_DOY:
+ result = int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp with time zone units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ /* (timestamp - epoch) / 1000000 */
+ epoch = SetEpochTimestamp();
+ /*
+ * epoch is negative, so PG_INT64_MAX + epoch cannot overflow; the
+ * test checks whether timestamp - epoch fits in int64, else fall
+ * back to doing the subtraction in numeric arithmetic.
+ */
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = numeric_div_opt_error(
+ int64_to_numeric(timestamp - epoch),
+ int64_to_numeric(1000000),
+ NULL);
+ else
+ result = numeric_div_opt_error(
+ numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("timestamp with time zone units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("timestamp with time zone units \"%s\" not recognized",
+ lowunits)));
+
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
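The timezone_hour / timezone_minute arithmetic above relies on C's truncate-toward-zero integer division; Python's `//` floors instead, so this sketch uses a truncating helper. `tdiv` and `tz_parts` are illustrative names, and `tz` carries the sign as `timestamp2tm` stores it:

```python
SECS_PER_HOUR, SECS_PER_MINUTE = 3600, 60

def tdiv(a, b):
    # C-style truncating integer division (Python's // floors)
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

def tz_parts(tz):
    # mirrors the DTK_TZ_HOUR and DTK_TZ_MINUTE cases above
    tz_hour = tdiv(-tz, SECS_PER_HOUR)
    tz_min = tdiv(-(tz - tdiv(tz, SECS_PER_HOUR) * SECS_PER_HOUR), SECS_PER_MINUTE)
    return tz_hour, tz_min

print(tz_parts(-19800))  # (5, 30): a zone 5 hours 30 minutes east of UTC
```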
/* interval_part()
* Extract specified field from interval.
@@ -5032,6 +5555,135 @@ interval_part(PG_FUNCTION_ARGS)
}
+/* extract_interval()
+ * Extract specified field from interval.
+ */
+Datum
+extract_interval(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ Interval *interval = PG_GETARG_INTERVAL_P(1);
+ Numeric result;
+ int type,
+ val;
+ char *lowunits;
+ fsec_t fsec;
+ struct pg_tm tt,
+ *tm = &tt;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (type == UNITS)
+ {
+ if (interval2tm(*interval, tm, &fsec) == 0)
+ {
+ switch (val)
+ {
+ case DTK_MICROSEC:
+ /* tm->tm_sec * 1000000 + fsec */
+ result = int64_to_numeric(tm->tm_sec * 1000000 + fsec);
+ break;
+
+ case DTK_MILLISEC:
+ /* tm->tm_sec * 1000 + fsec / 1000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000);
+ break;
+
+ case DTK_SECOND:
+ /* tm->tm_sec + fsec / 1'000'000
+ = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000 */
+ result = int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 1000000);
+ break;
+
+ case DTK_MINUTE:
+ result = int64_to_numeric(tm->tm_min);
+ break;
+
+ case DTK_HOUR:
+ result = int64_to_numeric(tm->tm_hour);
+ break;
+
+ case DTK_DAY:
+ result = int64_to_numeric(tm->tm_mday);
+ break;
+
+ case DTK_MONTH:
+ result = int64_to_numeric(tm->tm_mon);
+ break;
+
+ case DTK_QUARTER:
+ result = int64_to_numeric((tm->tm_mon / 3) + 1);
+ break;
+
+ case DTK_YEAR:
+ result = int64_to_numeric(tm->tm_year);
+ break;
+
+ case DTK_DECADE:
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(tm->tm_year / 10);
+ break;
+
+ case DTK_CENTURY:
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(tm->tm_year / 100);
+ break;
+
+ case DTK_MILLENNIUM:
+ /* caution: C division may have negative remainder */
+ result = int64_to_numeric(tm->tm_year / 1000);
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("interval units \"%s\" not supported",
+ lowunits)));
+ result = 0;
+ }
+
+ }
+ else
+ {
+ elog(ERROR, "could not convert interval to tm");
+ result = 0;
+ }
+ }
+ else if (type == RESERV && val == DTK_EPOCH)
+ {
+ result =
+ numeric_add_opt_error(
+ numeric_div_opt_error(int64_to_numeric(interval->time),
+ int64_to_numeric(1000000),
+ NULL),
+ /* cast the product: DAYS_PER_YEAR is 365.25, so (int64) DAYS_PER_YEAR
+ would silently drop the fractional day; 365.25 * 86400 is exactly 31557600 */
+ int64_to_numeric(((int64) (DAYS_PER_YEAR * SECS_PER_DAY)) * (interval->month / MONTHS_PER_YEAR) +
+ ((int64) DAYS_PER_MONTH * SECS_PER_DAY) * (interval->month % MONTHS_PER_YEAR) +
+ ((int64) SECS_PER_DAY) * interval->day),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("interval units \"%s\" not recognized",
+ lowunits)));
+ result = 0;
+ }
+
+ PG_RETURN_NUMERIC(result);
+}
+
+
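The interval-epoch formula treats whole years as 365.25 days and leftover months as 30 days, matching the float8 date_part semantics. A Python sketch for non-negative intervals (C's `%` truncates while Python's floors, so negative months would need the truncating treatment shown earlier; `interval_epoch_seconds` is an illustrative name):

```python
from decimal import Decimal

DAYS_PER_YEAR = 365.25
DAYS_PER_MONTH = 30
SECS_PER_DAY = 86400
MONTHS_PER_YEAR = 12

def interval_epoch_seconds(months, days, micros):
    # whole years at 365.25 days, leftover months at 30 days, plus days,
    # plus the sub-day part carried in microseconds
    whole = (int(DAYS_PER_YEAR * SECS_PER_DAY) * (months // MONTHS_PER_YEAR)
             + DAYS_PER_MONTH * SECS_PER_DAY * (months % MONTHS_PER_YEAR)
             + SECS_PER_DAY * days)
    return Decimal(whole) + Decimal(micros).scaleb(-6)

print(interval_epoch_seconds(12, 0, 0))  # 31557600.000000 (365.25 days)
print(interval_epoch_seconds(1, 0, 0))   # 2592000.000000 (30 days)
```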
/* timestamp_zone()
* Encode timestamp type with specified time zone.
* This function is just timestamp2timestamptz() except instead of
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 93393fcfd4..2087315321 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -2336,9 +2336,15 @@
{ oid => '1171', descr => 'extract field from timestamp with time zone',
proname => 'date_part', provolatile => 's', prorettype => 'float8',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_part' },
+{ oid => '9983', descr => 'extract field from timestamp with time zone',
+ proname => 'extract', provolatile => 's', prorettype => 'numeric',
+ proargtypes => 'text timestamptz', prosrc => 'extract_timestamptz' },
{ oid => '1172', descr => 'extract field from interval',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text interval', prosrc => 'interval_part' },
+{ oid => '9984', descr => 'extract field from interval',
+ proname => 'extract', prorettype => 'numeric',
+ proargtypes => 'text interval', prosrc => 'extract_interval' },
{ oid => '1174', descr => 'convert date to timestamp with time zone',
proname => 'timestamptz', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'date', prosrc => 'date_timestamptz' },
@@ -2486,6 +2492,9 @@
{ oid => '1273', descr => 'extract field from time with time zone',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text timetz',
prosrc => 'timetz_part' },
+{ oid => '9981', descr => 'extract field from time with time zone',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text timetz',
+ prosrc => 'extract_timetz' },
{ oid => '1274',
proname => 'int84pl', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int84pl' },
@@ -2831,9 +2840,15 @@
proname => 'date_part', prolang => 'sql', prorettype => 'float8',
proargtypes => 'text date',
prosrc => 'select pg_catalog.date_part($1, cast($2 as timestamp without time zone))' },
+{ oid => '9979', descr => 'extract field from date',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text date',
+ prosrc => 'extract_date' },
{ oid => '1385', descr => 'extract field from time',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text time',
prosrc => 'time_part' },
+{ oid => '9980', descr => 'extract field from time',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text time',
+ prosrc => 'extract_time' },
{ oid => '1386',
descr => 'date difference from today preserving months and years',
proname => 'age', prolang => 'sql', provolatile => 's',
@@ -5810,6 +5825,9 @@
{ oid => '2021', descr => 'extract field from timestamp',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text timestamp', prosrc => 'timestamp_part' },
+{ oid => '9982', descr => 'extract field from timestamp',
+ proname => 'extract', prorettype => 'numeric',
+ proargtypes => 'text timestamp', prosrc => 'extract_timestamp' },
{ oid => '2024', descr => 'convert date to timestamp',
proname => 'timestamp', prorettype => 'timestamp', proargtypes => 'date',
prosrc => 'date_timestamp' },
diff --git a/src/include/utils/numeric.h b/src/include/utils/numeric.h
index a362b5beb2..c5ae66469d 100644
--- a/src/include/utils/numeric.h
+++ b/src/include/utils/numeric.h
@@ -63,6 +63,7 @@ extern char *numeric_out_sci(Numeric num, int scale);
extern char *numeric_normalize(Numeric num);
extern Numeric int64_to_numeric(int64 val);
+extern Numeric int64_div_fast_to_numeric(int64 val1, int64 val2);
extern Numeric numeric_add_opt_error(Numeric num1, Numeric num2,
bool *have_error);
diff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out
index bd5fe60450..cdce4e0727 100644
--- a/src/test/regress/expected/create_view.out
+++ b/src/test/regress/expected/create_view.out
@@ -1742,7 +1742,7 @@ select
select pg_get_viewdef('tt201v', true);
pg_get_viewdef
-----------------------------------------------------------------------------------------------
- SELECT date_part('day'::text, now()) AS extr, +
+ SELECT EXTRACT(day FROM now()) AS extr, +
((now(), '@ 1 day'::interval) OVERLAPS (CURRENT_TIMESTAMP(2), '@ 1 day'::interval)) AS o,+
(('foo'::text) IS NORMALIZED) AS isn, +
(('foo'::text) IS NFKC NORMALIZED) AS isnn, +
diff --git a/src/test/regress/expected/date.out b/src/test/regress/expected/date.out
index 1b921ce215..d7677fc0b5 100644
--- a/src/test/regress/expected/date.out
+++ b/src/test/regress/expected/date.out
@@ -966,78 +966,78 @@ SELECT f1 as "date",
-- epoch
--
SELECT EXTRACT(EPOCH FROM DATE '1970-01-01'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMP '1970-01-01'); -- 0
- date_part
------------
- 0
+ extract
+----------
+ 0.000000
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00'); -- 0
- date_part
------------
- 0
+ extract
+----------
+ 0.000000
(1 row)
--
-- century
--
SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
- date_part
------------
- -2
+ extract
+---------
+ -2
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-01-01'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '1900-12-31'); -- 19
- date_part
------------
- 19
+ extract
+---------
+ 19
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '1901-01-01'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2000-12-31'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2001-01-01'); -- 21
- date_part
------------
- 21
+ extract
+---------
+ 21
(1 row)
SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True; -- true
@@ -1050,91 +1050,91 @@ SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True; -- true
-- millennium
--
SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01'); -- 2
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31'); -- 2
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01'); -- 3
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
-- next test to be fixed on the turn of the next millennium;-)
SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE); -- 3
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
--
-- decade
--
SELECT EXTRACT(DECADE FROM DATE '1994-12-25'); -- 199
- date_part
------------
- 199
+ extract
+---------
+ 199
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0010-01-01'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0009-12-31'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); -- -2
- date_part
------------
- -2
+ extract
+---------
+ -2
(1 row)
--
@@ -1148,135 +1148,112 @@ SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True; -- true
(1 row)
SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
--
-- all possible fields
--
SELECT EXTRACT(MICROSECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "microseconds" not supported
SELECT EXTRACT(MILLISECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "milliseconds" not supported
SELECT EXTRACT(SECOND FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "second" not supported
SELECT EXTRACT(MINUTE FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "minute" not supported
SELECT EXTRACT(HOUR FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "hour" not supported
SELECT EXTRACT(DAY FROM DATE '2020-08-11');
- date_part
------------
- 11
+ extract
+---------
+ 11
(1 row)
SELECT EXTRACT(MONTH FROM DATE '2020-08-11');
- date_part
------------
- 8
+ extract
+---------
+ 8
(1 row)
SELECT EXTRACT(YEAR FROM DATE '2020-08-11');
- date_part
------------
- 2020
+ extract
+---------
+ 2020
(1 row)
SELECT EXTRACT(DECADE FROM DATE '2020-08-11');
- date_part
------------
- 202
+ extract
+---------
+ 202
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2020-08-11');
- date_part
------------
- 21
+ extract
+---------
+ 21
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2020-08-11');
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11');
- date_part
------------
- 2020
+ extract
+---------
+ 2020
(1 row)
SELECT EXTRACT(QUARTER FROM DATE '2020-08-11');
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
SELECT EXTRACT(WEEK FROM DATE '2020-08-11');
- date_part
------------
- 33
+ extract
+---------
+ 33
(1 row)
SELECT EXTRACT(DOW FROM DATE '2020-08-11');
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(ISODOW FROM DATE '2020-08-11');
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(DOY FROM DATE '2020-08-11');
- date_part
------------
- 224
+ extract
+---------
+ 224
(1 row)
SELECT EXTRACT(TIMEZONE FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone" not supported
SELECT EXTRACT(TIMEZONE_M FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_m" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_m" not supported
SELECT EXTRACT(TIMEZONE_H FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_h" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_h" not supported
SELECT EXTRACT(EPOCH FROM DATE '2020-08-11');
- date_part
+ extract
------------
1597104000
(1 row)
SELECT EXTRACT(JULIAN FROM DATE '2020-08-11');
- date_part
------------
- 2459073
+ extract
+---------
+ 2459073
(1 row)
--
@@ -1372,221 +1349,188 @@ select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'
--
-- oscillating fields from non-finite date/timestamptz:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
+ extract
+---------
+
(1 row)
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
- date_part
------------
-
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMP 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMP '-infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
-- all possible fields
SELECT EXTRACT(MICROSECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "microseconds" not supported
SELECT EXTRACT(MILLISECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "milliseconds" not supported
SELECT EXTRACT(SECOND FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "second" not supported
SELECT EXTRACT(MINUTE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "minute" not supported
SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "hour" not supported
SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(MONTH FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(QUARTER FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(WEEK FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(DOW FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(ISODOW FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(DOY FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(TIMEZONE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone" not supported
SELECT EXTRACT(TIMEZONE_M FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone_m" not supported
SELECT EXTRACT(TIMEZONE_H FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
+ERROR: date units "timezone_h" not supported
--
-- monotonic fields from non-finite date/timestamptz:
--
SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM DATE '-infinity'); -- -Infinity
- date_part
+ extract
-----------
-Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMP 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMP '-infinity'); -- -Infinity
- date_part
+ extract
-----------
-Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
- date_part
+ extract
-----------
-Infinity
(1 row)
-- all possible fields
SELECT EXTRACT(YEAR FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(DECADE FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(CENTURY FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(JULIAN FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(ISOYEAR FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
-ERROR: timestamp units "microsec" not recognized
-CONTEXT: SQL function "date_part" statement 1
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- error
+ERROR: date units "microsec" not recognized
-- test constructors
select make_date(2013, 7, 15);
make_date
diff --git a/src/test/regress/expected/interval.out b/src/test/regress/expected/interval.out
index acc353a0dc..80e07fe355 100644
--- a/src/test/regress/expected/interval.out
+++ b/src/test/regress/expected/interval.out
@@ -948,18 +948,18 @@ SELECT f1,
EXTRACT(MILLENNIUM FROM f1) AS MILLENNIUM,
EXTRACT(EPOCH FROM f1) AS EPOCH
FROM INTERVAL_TBL;
- f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
--------------------------------+-------------+-------------+--------+--------+------+-----+-------+---------+------+--------+---------+------------+------------
- @ 1 min | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60
- @ 5 hours | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000
- @ 10 days | 0 | 0 | 0 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000
- @ 34 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072958400
- @ 3 mons | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000
- @ 14 secs ago | -14000000 | -14000 | -14 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14
- @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000 | 4 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784
- @ 6 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189345600
- @ 5 mons | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000
- @ 5 mons 12 hours | 0 | 0 | 0 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200
+ f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
+-------------------------------+-------------+-------------+------------+--------+------+-----+-------+---------+------+--------+---------+------------+-------------------
+ @ 1 min | 0 | 0.000 | 0.000000 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60.000000
+ @ 5 hours | 0 | 0.000 | 0.000000 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000.000000
+ @ 10 days | 0 | 0.000 | 0.000000 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000.000000
+ @ 34 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072224000.000000
+ @ 3 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000.000000
+ @ 14 secs ago | -14000000 | -14000.000 | -14.000000 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14.000000
+ @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000.000 | 4.000000 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784.000000
+ @ 6 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189216000.000000
+ @ 5 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000.000000
+ @ 5 mons 12 hours | 0 | 0.000 | 0.000000 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200.000000
(10 rows)
SELECT EXTRACT(FORTNIGHT FROM INTERVAL '2 days'); -- error
@@ -967,50 +967,50 @@ ERROR: interval units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM INTERVAL '2 days'); -- error
ERROR: interval units "timezone" not supported
SELECT EXTRACT(DECADE FROM INTERVAL '100 y');
- date_part
------------
- 10
+ extract
+---------
+ 10
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '99 y');
- date_part
------------
- 9
+ extract
+---------
+ 9
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '-99 y');
- date_part
------------
- -9
+ extract
+---------
+ -9
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '-100 y');
- date_part
------------
- -10
+ extract
+---------
+ -10
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y');
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
diff --git a/src/test/regress/expected/psql_crosstab.out b/src/test/regress/expected/psql_crosstab.out
index eae6fbd051..e09e331016 100644
--- a/src/test/regress/expected/psql_crosstab.out
+++ b/src/test/regress/expected/psql_crosstab.out
@@ -17,12 +17,12 @@ SELECT v, EXTRACT(year FROM d), count(*)
FROM ctv_data
GROUP BY 1, 2
ORDER BY 1, 2;
- v | date_part | count
-----+-----------+-------
- v0 | 2014 | 2
- v0 | 2015 | 1
- v1 | 2015 | 3
- v2 | 2015 | 1
+ v | extract | count
+----+---------+-------
+ v0 | 2014 | 2
+ v0 | 2015 | 1
+ v1 | 2015 | 3
+ v2 | 2015 | 1
(4 rows)
-- basic usage with 3 columns
diff --git a/src/test/regress/expected/time.out b/src/test/regress/expected/time.out
index 5303cc0c94..f9786e0b4e 100644
--- a/src/test/regress/expected/time.out
+++ b/src/test/regress/expected/time.out
@@ -131,33 +131,33 @@ HINT: Could not choose a best candidate operator. You might need to add explici
-- test EXTRACT
--
SELECT EXTRACT(MICROSECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 25575401
+ extract
+----------
+ 25575401
(1 row)
SELECT EXTRACT(MILLISECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
-----------
25575.401
(1 row)
SELECT EXTRACT(SECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
-----------
25.575401
(1 row)
SELECT EXTRACT(MINUTE FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 30
+ extract
+---------
+ 30
(1 row)
SELECT EXTRACT(HOUR FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 13
+ extract
+---------
+ 13
(1 row)
SELECT EXTRACT(DAY FROM TIME '2020-05-26 13:30:25.575401'); -- error
@@ -167,7 +167,7 @@ ERROR: "time" units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM TIME '2020-05-26 13:30:25.575401'); -- error
ERROR: "time" units "timezone" not recognized
SELECT EXTRACT(EPOCH FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
--------------
48625.575401
(1 row)
diff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out
index 1ab5ed5105..74f8a78a24 100644
--- a/src/test/regress/expected/timetz.out
+++ b/src/test/regress/expected/timetz.out
@@ -148,33 +148,33 @@ HINT: No operator matches the given name and argument types. You might need to
-- test EXTRACT
--
SELECT EXTRACT(MICROSECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 25575401
+ extract
+----------
+ 25575401
(1 row)
SELECT EXTRACT(MILLISECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
-----------
25575.401
(1 row)
SELECT EXTRACT(SECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
-----------
25.575401
(1 row)
SELECT EXTRACT(MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 30
+ extract
+---------
+ 30
(1 row)
SELECT EXTRACT(HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 13
+ extract
+---------
+ 13
(1 row)
SELECT EXTRACT(DAY FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
@@ -182,25 +182,25 @@ ERROR: "time with time zone" units "day" not recognized
SELECT EXTRACT(FORTNIGHT FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
ERROR: "time with time zone" units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- -14400
+ extract
+---------
+ -14400
(1 row)
-SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- -4
+SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+ extract
+---------
+ -4
(1 row)
-SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 0
+SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+ extract
+---------
+ -30
(1 row)
SELECT EXTRACT(EPOCH FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
--------------
63025.575401
(1 row)
diff --git a/src/test/regress/sql/date.sql b/src/test/regress/sql/date.sql
index 7a734fb1a0..eb0a9d8ddd 100644
--- a/src/test/regress/sql/date.sql
+++ b/src/test/regress/sql/date.sql
@@ -329,8 +329,8 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- oscillating fields from non-finite date/timestamptz:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMP 'infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMP '-infinity'); -- NULL
SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity'); -- NULL
@@ -371,7 +371,7 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- error
-- test constructors
select make_date(2013, 7, 15);
diff --git a/src/test/regress/sql/timetz.sql b/src/test/regress/sql/timetz.sql
index ce763d89e8..307bab31fc 100644
--- a/src/test/regress/sql/timetz.sql
+++ b/src/test/regress/sql/timetz.sql
@@ -67,6 +67,6 @@ CREATE TABLE TIMETZ_TBL (f1 time(2) with time zone);
SELECT EXTRACT(DAY FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
SELECT EXTRACT(FORTNIGHT FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
SELECT EXTRACT(TIMEZONE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
-SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
-SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
+SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
SELECT EXTRACT(EPOCH FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
base-commit: 27ab1981e7c9b8fcbcb143c5f6f706441a52bbc8
--
2.30.2
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
Well, I had an idea that I put to work. In most of these cases where we
need division, we divide an integer by a power of 10. That can be done
with numeric very quickly by just shifting the weight and scale around.
So I wrote a function that does that specifically (look for
int64_div_fast_to_numeric()). With that, the slow cases I mentioned now
have the same performance as the other cases that didn't have any
numeric division. You just get the overhead for constructing and
passing around a numeric instead of a double, which can't be avoided.
Yeah, I was wondering if we could do something like that, but I hadn't
got as far as figuring a way to deal with divisors not a multiple of
NBASE.
Looking at the proposed code, I wonder if it wouldn't be better to
define the function as taking the base-10-log of the divisor, so that
you'd have the number of digits to shift (and the dscale) immediately
instead of needing repeated integer divisions to get that. Also, the
risk of intermediate overflow here seems annoying:
+ if (unlikely(pg_mul_s64_overflow(val1, NBASE/x, &val1)))
+ elog(ERROR, "overflow");
Maybe that's unreachable for the ranges of inputs the current patch could
create, but it seems like it makes the function distinctly less
general-purpose than one would think from its comment. Maybe, if that
overflows, we could handle the failure by making that adjustment after
we've converted to numeric?
So here is an intermediate patch that does this. I haven't gotten rid
of all numeric_div_opt_error() calls yet, but if this seems acceptable,
I can work on the remaining ones.
I guess the immediate question is how much of a performance gap there
is now between the float and numeric implementations.
regards, tom lane
On 19.03.21 21:06, Tom Lane wrote:
I guess the immediate question is how much of a performance gap there
is now between the float and numeric implementations.
Attached are my test script and the full output.
To summarize, for cases that don't do any interesting computation and
where the overhead is only the data type passing, the difference is like
this:
-- old
select date_part('microseconds', current_timestamp + generate_series(0,
10000000) * interval '1 second') \g /dev/null
Time: 2760.966 ms (00:02.761)
-- new
select extract(microseconds from current_timestamp + generate_series(0,
10000000) * interval '1 second') \g /dev/null
Time: 3178.477 ms (00:03.178)
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
On 19.03.21 21:06, Tom Lane wrote:
I guess the immediate question is how much of a performance gap there
is now between the float and numeric implementations.
Attached are my test script and the full output.
OK ... I prefer to do this sort of timing in a way that's not so
dependent on client I/O speeds, along the lines of
select count(date_part('day', current_date + g * interval '1 day')) from generate_series(0, :N) g;
I applied the v5 patch and ran your test suite that way, producing
the attached results. It looks pretty promising for me, too.
Most of the cases show about 10%-15% degradation:
# select extract, date_part, extract/date_part as ratio, unit from (select sum(msec) filter (where fn = 'extract') as extract, sum(msec) filter (where fn = 'date_part') as date_part, unit from timings group by unit) ss order by ratio;
extract | date_part | ratio | unit
-----------+-----------+------------------------+-----------------
22690.100 | 20705.402 | 1.09585411575201486066 | decade
22810.005 | 20754.296 | 1.09904980636298142804 | century
11238.122 | 10190.385 | 1.10281623314526389337 | timezone_minute
20201.992 | 18303.982 | 1.1036938301184955 | doy
20121.073 | 18206.290 | 1.1051715094069138 | dow
23209.090 | 20915.715 | 1.10964841507928368693 | millennium
18839.455 | 16943.063 | 1.11192734159106886399 | week
20130.843 | 18010.011 | 1.1177585066438882 | isoyear
19755.296 | 17668.497 | 1.11810846163089027890 | isodow
22500.373 | 20112.264 | 1.11873894455641592612 | day
22631.485 | 20200.266 | 1.12035579135443067928 | month
22883.344 | 20407.733 | 1.12130749652594925659 | quarter
22628.524 | 20172.361 | 1.12175882634660365239 | year
26503.545 | 23493.288 | 1.12813263941598979249 | minute
26381.817 | 23329.924 | 1.13081452815705700542 | hour
27236.886 | 24070.860 | 1.13152940941869131390 | microseconds
11563.820 | 9948.148 | 1.1624093248311143 | timezone_hour
27728.212 | 23567.973 | 1.17652086583771968849 | second
28348.328 | 23984.219 | 1.18195751965073367617 | milliseconds
49902.129 | 30798.034 | 1.6203024193037776 | epoch
31544.035 | 18250.745 | 1.7283697186060076 | julian
(21 rows)
The outliers are epoch and julian, which unsurprisingly are the
ones you didn't fix yet.
I think a ten-percent-ish slowdown is acceptable for this purpose,
so I think if you can address the points already raised then we're
pretty much good to go with this.
regards, tom lane
On 19.03.21 21:06, Tom Lane wrote:
Yeah, I was wondering if we could do something like that, but I hadn't
got as far as figuring a way to deal with divisors not a multiple of
NBASE.
Looking at the proposed code, I wonder if it wouldn't be better to
define the function as taking the base-10-log of the divisor, so that
you'd have the number of digits to shift (and the dscale) immediately
instead of needing repeated integer divisions to get that.
done that way, much simpler now
Also, the
risk of intermediate overflow here seems annoying:
+ if (unlikely(pg_mul_s64_overflow(val1, NBASE/x, &val1)))
+ elog(ERROR, "overflow");
Maybe that's unreachable for the ranges of inputs the current patch could
create, but it seems like it makes the function distinctly less
general-purpose than one would think from its comment. Maybe, if that
overflows, we could handle the failure by making that adjustment after
we've converted to numeric?
also done
I also figured out a way to combine the float8 and numeric
implementations so that there is not so much duplication. Added tests
to cover all the edge and overflow cases.
I think this is solid now.
The extract(julian from timestamp) case still uses the slow path, but
as I previously stated, it's not documented and gives the wrong result,
so it's not clear whether it should be fixed and what it should do. I
think I'll register that part as an open item in any case, to see what
we should do about that.
Attachments:
v6-0001-Change-return-type-of-EXTRACT-to-numeric.patch
From d6d28fc50107401ffd9d8ed0e7c641e6ef21fe3c Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Thu, 1 Apr 2021 20:12:07 +0200
Subject: [PATCH v6] Change return type of EXTRACT to numeric
The previous implementation of EXTRACT mapped internally to
date_part(), which returned type double precision (since it was
implemented long before the numeric type existed). This can lead to
imprecise output in some cases, so returning numeric would be
preferable. Changing the return type of an existing function is a
bit risky, so instead we do the following: We implement a new set of
functions, which are now called "extract", in parallel to the existing
date_part functions. They work the same way internally but use
numeric instead of float8. The EXTRACT construct is now mapped by the
parser to these new extract functions. That way, dumps of views
etc. from old versions (which would use date_part) continue to work
unchanged, but new uses will map to the new extract functions.
Additionally, the reverse compilation of EXTRACT now reproduces the
original syntax, using the new mechanism introduced in
40c24bfef92530bd846e111c1742c2a54441c62c.
The following minor changes of behavior result from the new
implementation:
- The column name from an isolated EXTRACT call is now "extract"
instead of "date_part".
- Extract from date now rejects inappropriate field names such as
HOUR. It was previously mapped internally to extract from
timestamp, so it would silently accept everything appropriate for
timestamp.
- Return values when extracting fields with possibly fractional
values, such as second and epoch, now have the full scale that the
value has internally (so, for example, '1.000000' instead of just
'1').
Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
---
doc/src/sgml/func.sgml | 10 +-
src/backend/parser/gram.y | 2 +-
src/backend/utils/adt/date.c | 308 +++++++++++--
src/backend/utils/adt/numeric.c | 61 +++
src/backend/utils/adt/ruleutils.c | 21 +
src/backend/utils/adt/timestamp.c | 471 ++++++++++++++------
src/include/catalog/pg_proc.dat | 18 +
src/include/utils/numeric.h | 1 +
src/test/regress/expected/create_view.out | 2 +-
src/test/regress/expected/date.out | 452 +++++++++----------
src/test/regress/expected/interval.out | 101 +++--
src/test/regress/expected/psql_crosstab.out | 12 +-
src/test/regress/expected/time.out | 48 +-
src/test/regress/expected/timestamp.out | 91 ++++
src/test/regress/expected/timestamptz.out | 92 ++++
src/test/regress/expected/timetz.out | 60 ++-
src/test/regress/sql/date.sql | 22 +-
src/test/regress/sql/interval.sql | 12 +
src/test/regress/sql/time.sql | 7 +
src/test/regress/sql/timestamp.sql | 13 +
src/test/regress/sql/timestamptz.sql | 13 +
src/test/regress/sql/timetz.sql | 7 +
22 files changed, 1329 insertions(+), 495 deletions(-)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 3cf243a16a..663f966b97 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -8872,7 +8872,7 @@ <title>Date/Time Functions</title>
<primary>extract</primary>
</indexterm>
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>timestamp</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get timestamp subfield; see <xref linkend="functions-datetime-extract"/>
@@ -8886,7 +8886,7 @@ <title>Date/Time Functions</title>
<row>
<entry role="func_table_entry"><para role="func_signature">
<function>extract</function> ( <parameter>field</parameter> <literal>from</literal> <type>interval</type> )
- <returnvalue>double precision</returnvalue>
+ <returnvalue>numeric</returnvalue>
</para>
<para>
Get interval subfield; see <xref linkend="functions-datetime-extract"/>
@@ -9401,7 +9401,7 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
well.) <replaceable>field</replaceable> is an identifier or
string that selects what field to extract from the source value.
The <function>extract</function> function returns values of type
- <type>double precision</type>.
+ <type>numeric</type>.
The following are valid field names:
<!-- alphabetical -->
@@ -9825,6 +9825,10 @@ <title><function>EXTRACT</function>, <function>date_part</function></title>
be a string value, not a name. The valid field names for
<function>date_part</function> are the same as for
<function>extract</function>.
+ For historical reasons, the <function>date_part</function> function
+ returns values of type <type>double precision</type>. This can result in
+ a loss of precision in certain uses. Using <function>extract</function>
+ is recommended instead.
</para>
<screen>
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 8b1bad0d79..05cc2c9ae0 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -14020,7 +14020,7 @@ func_expr_common_subexpr:
{ $$ = makeTypeCast($3, $5, @1); }
| EXTRACT '(' extract_list ')'
{
- $$ = (Node *) makeFuncCall(SystemFuncName("date_part"),
+ $$ = (Node *) makeFuncCall(SystemFuncName("extract"),
$3,
COERCE_SQL_SYNTAX,
@1);
diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c
index 6053d0e8a6..83036e5985 100644
--- a/src/backend/utils/adt/date.c
+++ b/src/backend/utils/adt/date.c
@@ -31,6 +31,7 @@
#include "utils/builtins.h"
#include "utils/date.h"
#include "utils/datetime.h"
+#include "utils/numeric.h"
#include "utils/sortsupport.h"
/*
@@ -1063,6 +1064,182 @@ in_range_date_interval(PG_FUNCTION_ARGS)
}
+/* extract_date()
+ * Extract specified field from date type.
+ */
+Datum
+extract_date(PG_FUNCTION_ARGS)
+{
+ text *units = PG_GETARG_TEXT_PP(0);
+ DateADT date = PG_GETARG_DATEADT(1);
+ int64 intresult;
+ int type,
+ val;
+ char *lowunits;
+ int year,
+ mon,
+ mday;
+
+ lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
+ VARSIZE_ANY_EXHDR(units),
+ false);
+
+ type = DecodeUnits(0, lowunits, &val);
+ if (type == UNKNOWN_FIELD)
+ type = DecodeSpecial(0, lowunits, &val);
+
+ if (DATE_NOT_FINITE(date) && (type == UNITS || type == RESERV))
+ {
+ switch (val)
+ {
+ /* Oscillating units */
+ case DTK_DAY:
+ case DTK_MONTH:
+ case DTK_QUARTER:
+ case DTK_WEEK:
+ case DTK_DOW:
+ case DTK_ISODOW:
+ case DTK_DOY:
+ PG_RETURN_NULL();
+ break;
+
+ /* Monotonically-increasing units */
+ case DTK_YEAR:
+ case DTK_DECADE:
+ case DTK_CENTURY:
+ case DTK_MILLENNIUM:
+ case DTK_JULIAN:
+ case DTK_ISOYEAR:
+ case DTK_EPOCH:
+ if (DATE_IS_NOBEGIN(date))
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ else
+ PG_RETURN_NUMERIC(DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1))));
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ }
+ }
+ else if (type == UNITS)
+ {
+ j2date(date + POSTGRES_EPOCH_JDATE, &year, &mon, &mday);
+
+ switch (val)
+ {
+ case DTK_DAY:
+ intresult = mday;
+ break;
+
+ case DTK_MONTH:
+ intresult = mon;
+ break;
+
+ case DTK_QUARTER:
+ intresult = (mon - 1) / 3 + 1;
+ break;
+
+ case DTK_WEEK:
+ intresult = date2isoweek(year, mon, mday);
+ break;
+
+ case DTK_YEAR:
+ if (year > 0)
+ intresult = year;
+ else
+ /* there is no year 0, just 1 BC and 1 AD */
+ intresult = year - 1;
+ break;
+
+ case DTK_DECADE:
+ /* see comments in timestamp_part */
+ if (year >= 0)
+ intresult = year / 10;
+ else
+ intresult = -((8 - (year - 1)) / 10);
+ break;
+
+ case DTK_CENTURY:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ intresult = (year + 99) / 100;
+ else
+ intresult = -((99 - (year - 1)) / 100);
+ break;
+
+ case DTK_MILLENNIUM:
+ /* see comments in timestamp_part */
+ if (year > 0)
+ intresult = (year + 999) / 1000;
+ else
+ intresult = -((999 - (year - 1)) / 1000);
+ break;
+
+ case DTK_JULIAN:
+ intresult = date + POSTGRES_EPOCH_JDATE;
+ break;
+
+ case DTK_ISOYEAR:
+ intresult = date2isoyear(year, mon, mday);
+ /* Adjust BC years */
+ if (intresult <= 0)
+ intresult -= 1;
+ break;
+
+ case DTK_DOW:
+ case DTK_ISODOW:
+ intresult = j2day(date + POSTGRES_EPOCH_JDATE);
+ if (val == DTK_ISODOW && intresult == 0)
+ intresult = 7;
+ break;
+
+ case DTK_DOY:
+ intresult = date2j(year, mon, mday) - date2j(year, 1, 1) + 1;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ intresult = 0;
+ }
+ }
+ else if (type == RESERV)
+ {
+ switch (val)
+ {
+ case DTK_EPOCH:
+ intresult = ((int64) date + POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY;
+ break;
+
+ default:
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("date units \"%s\" not supported",
+ lowunits)));
+ intresult = 0;
+ }
+ }
+ else
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("date units \"%s\" not recognized", lowunits)));
+ intresult = 0;
+ }
+
+ PG_RETURN_NUMERIC(int64_to_numeric(intresult));
+}
+
+
/* Add an interval to a date, giving a new date.
* Must handle both positive and negative intervals.
*
@@ -1949,15 +2126,15 @@ in_range_time_interval(PG_FUNCTION_ARGS)
}
-/* time_part()
+/* time_part() and extract_time()
* Extract specified field from time type.
*/
-Datum
-time_part(PG_FUNCTION_ARGS)
+static Datum
+time_part_common(PG_FUNCTION_ARGS, bool retnumeric)
{
text *units = PG_GETARG_TEXT_PP(0);
TimeADT time = PG_GETARG_TIMEADT(1);
- float8 result;
+ int64 intresult;
int type,
val;
char *lowunits;
@@ -1981,23 +2158,37 @@ time_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ intresult = tm->tm_sec * 1000000 + fsec;
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec * 1000 + fsec / 1000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 3));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec * 1000.0 + fsec / 1000.0);
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec + fsec / 1'000'000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 6));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec + fsec / 1000000.0);
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ intresult = tm->tm_min;
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ intresult = tm->tm_hour;
break;
case DTK_TZ:
@@ -2016,12 +2207,15 @@ time_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("\"time\" units \"%s\" not recognized",
lowunits)));
- result = 0;
+ intresult = 0;
}
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = time / 1000000.0;
+ if (retnumeric)
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(time, 6));
+ else
+ PG_RETURN_FLOAT8(time / 1000000.0);
}
else
{
@@ -2029,10 +2223,25 @@ time_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("\"time\" units \"%s\" not recognized",
lowunits)));
- result = 0;
+ intresult = 0;
}
- PG_RETURN_FLOAT8(result);
+ if (retnumeric)
+ PG_RETURN_NUMERIC(int64_to_numeric(intresult));
+ else
+ PG_RETURN_FLOAT8(intresult);
+}
+
+Datum
+time_part(PG_FUNCTION_ARGS)
+{
+ return time_part_common(fcinfo, false);
+}
+
+Datum
+extract_time(PG_FUNCTION_ARGS)
+{
+ return time_part_common(fcinfo, true);
}
@@ -2686,15 +2895,15 @@ datetimetz_timestamptz(PG_FUNCTION_ARGS)
}
-/* timetz_part()
+/* timetz_part() and extract_timetz()
* Extract specified field from time type.
*/
-Datum
-timetz_part(PG_FUNCTION_ARGS)
+static Datum
+timetz_part_common(PG_FUNCTION_ARGS, bool retnumeric)
{
text *units = PG_GETARG_TEXT_PP(0);
TimeTzADT *time = PG_GETARG_TIMETZADT_P(1);
- float8 result;
+ int64 intresult;
int type,
val;
char *lowunits;
@@ -2709,7 +2918,6 @@ timetz_part(PG_FUNCTION_ARGS)
if (type == UNITS)
{
- double dummy;
int tz;
fsec_t fsec;
struct pg_tm tt,
@@ -2720,38 +2928,49 @@ timetz_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_TZ:
- result = -tz;
+ intresult = -tz;
break;
case DTK_TZ_MINUTE:
- result = -tz;
- result /= SECS_PER_MINUTE;
- FMODULO(result, dummy, (double) MINS_PER_HOUR);
+ intresult = (-tz / SECS_PER_MINUTE) % MINS_PER_HOUR;
break;
case DTK_TZ_HOUR:
- dummy = -tz;
- FMODULO(dummy, result, (double) SECS_PER_HOUR);
+ intresult = -tz / SECS_PER_HOUR;
break;
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ intresult = tm->tm_sec * 1000000 + fsec;
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec * 1000 + fsec / 1000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 3));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec * 1000.0 + fsec / 1000.0);
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec + fsec / 1'000'000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 6));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec + fsec / 1000000.0);
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ intresult = tm->tm_min;
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ intresult = tm->tm_hour;
break;
case DTK_DAY:
@@ -2766,12 +2985,19 @@ timetz_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("\"time with time zone\" units \"%s\" not recognized",
lowunits)));
- result = 0;
+ intresult = 0;
}
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = time->time / 1000000.0 + time->zone;
+ if (retnumeric)
+ /*---
+ * time->time / 1'000'000 + time->zone
+ * = (time->time + time->zone * 1'000'000) / 1'000'000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(time->time + time->zone * 1000000LL, 6));
+ else
+ PG_RETURN_FLOAT8(time->time / 1000000.0 + time->zone);
}
else
{
@@ -2779,10 +3005,26 @@ timetz_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("\"time with time zone\" units \"%s\" not recognized",
lowunits)));
- result = 0;
+ intresult = 0;
}
- PG_RETURN_FLOAT8(result);
+ if (retnumeric)
+ PG_RETURN_NUMERIC(int64_to_numeric(intresult));
+ else
+ PG_RETURN_FLOAT8(intresult);
+}
+
+
+Datum
+timetz_part(PG_FUNCTION_ARGS)
+{
+ return timetz_part_common(fcinfo, false);
+}
+
+Datum
+extract_timetz(PG_FUNCTION_ARGS)
+{
+ return timetz_part_common(fcinfo, true);
}
/* timetz_zone()
diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index 682200f636..9525ade1f7 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -4092,6 +4092,67 @@ int64_to_numeric(int64 val)
return res;
}
+/*
+ * Convert val1/(10**log10val2) to numeric. This is much faster than normal
+ * numeric division.
+ */
+Numeric
+int64_div_fast_to_numeric(int64 val1, int log10val2)
+{
+ Numeric res;
+ NumericVar result;
+ int64 saved_val1 = val1;
+ int w;
+ int m;
+
+ /* how much to decrease the weight by */
+ w = log10val2 / DEC_DIGITS;
+ /* how much is left */
+ m = log10val2 % DEC_DIGITS;
+
+ /*
+ * If there is anything left, multiply the dividend by what's left, then
+ * shift the weight by one more.
+ */
+ if (m > 0)
+ {
+ static int pow10[] = {1, 10, 100, 1000};
+
+ StaticAssertStmt(lengthof(pow10) == DEC_DIGITS, "mismatch with DEC_DIGITS");
+ if (unlikely(pg_mul_s64_overflow(val1, pow10[DEC_DIGITS - m], &val1)))
+ {
+ /*
+ * If it doesn't fit, do the whole computation in numeric the slow
+ * way. Note that val1 may have been overwritten, so use
+ * saved_val1 instead.
+ */
+ int val2 = 1;
+
+ for (int i = 0; i < log10val2; i++)
+ val2 *= 10;
+ res = numeric_div_opt_error(int64_to_numeric(saved_val1), int64_to_numeric(val2), NULL);
+ res = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(res),
+ Int32GetDatum(log10val2)));
+ return res;
+ }
+ w++;
+ }
+
+ init_var(&result);
+
+ int64_to_numericvar(val1, &result);
+
+ result.weight -= w;
+ result.dscale += w * DEC_DIGITS - (DEC_DIGITS - m);
+
+ res = make_result(&result);
+
+ free_var(&result);
+
+ return res;
+}
+
Datum
int4_numeric(PG_FUNCTION_ARGS)
{
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 254e8f3050..0b5314e49b 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -9782,6 +9782,27 @@ get_func_sql_syntax(FuncExpr *expr, deparse_context *context)
appendStringInfoString(buf, "))");
return true;
+ case F_EXTRACT_TEXT_DATE:
+ case F_EXTRACT_TEXT_TIME:
+ case F_EXTRACT_TEXT_TIMETZ:
+ case F_EXTRACT_TEXT_TIMESTAMP:
+ case F_EXTRACT_TEXT_TIMESTAMPTZ:
+ case F_EXTRACT_TEXT_INTERVAL:
+ /* EXTRACT (x FROM y) */
+ appendStringInfoString(buf, "EXTRACT(");
+ {
+ Const *con = (Const *) linitial(expr->args);
+
+ Assert(IsA(con, Const) &&
+ con->consttype == TEXTOID &&
+ !con->constisnull);
+ appendStringInfoString(buf, TextDatumGetCString(con->constvalue));
+ }
+ appendStringInfoString(buf, " FROM ");
+ get_rule_expr((Node *) lsecond(expr->args), context, false);
+ appendStringInfoChar(buf, ')');
+ return true;
+
case F_IS_NORMALIZED:
/* IS xxx NORMALIZED */
appendStringInfoString(buf, "((");
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 194861f19e..b2bdbcab57 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -22,6 +22,7 @@
#include "access/xact.h"
#include "catalog/pg_type.h"
+#include "common/int.h"
#include "common/int128.h"
#include "funcapi.h"
#include "libpq/pqformat.h"
@@ -35,6 +36,7 @@
#include "utils/date.h"
#include "utils/datetime.h"
#include "utils/float.h"
+#include "utils/numeric.h"
/*
* gcc's -ffast-math switch breaks routines that expect exact results from
@@ -3991,8 +3993,8 @@ timestamptz_bin(PG_FUNCTION_ARGS)
{
Interval *stride = PG_GETARG_INTERVAL_P(0);
TimestampTz timestamp = PG_GETARG_TIMESTAMPTZ(1);
- TimestampTz origin = PG_GETARG_TIMESTAMPTZ(2);
- TimestampTz result,
+ TimestampTz origin = PG_GETARG_TIMESTAMPTZ(2);
+ TimestampTz result,
stride_usecs,
tm_diff,
tm_delta;
@@ -4597,15 +4599,15 @@ NonFiniteTimestampTzPart(int type, int unit, char *lowunits,
}
}
-/* timestamp_part()
+/* timestamp_part() and extract_timestamp()
* Extract specified field from timestamp.
*/
-Datum
-timestamp_part(PG_FUNCTION_ARGS)
+static Datum
+timestamp_part_common(PG_FUNCTION_ARGS, bool retnumeric)
{
text *units = PG_GETARG_TEXT_PP(0);
Timestamp timestamp = PG_GETARG_TIMESTAMP(1);
- float8 result;
+ int64 intresult;
Timestamp epoch;
int type,
val;
@@ -4624,11 +4626,28 @@ timestamp_part(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(timestamp))
{
- result = NonFiniteTimestampTzPart(type, val, lowunits,
- TIMESTAMP_IS_NOBEGIN(timestamp),
- false);
- if (result)
- PG_RETURN_FLOAT8(result);
+ double r = NonFiniteTimestampTzPart(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ false);
+
+ if (r)
+ {
+ if (retnumeric)
+ {
+ if (r < 0)
+ return DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1));
+ else if (r > 0)
+ return DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1));
+ }
+ else
+ PG_RETURN_FLOAT8(r);
+ }
else
PG_RETURN_NULL();
}
@@ -4643,47 +4662,61 @@ timestamp_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ intresult = tm->tm_sec * 1000000.0 + fsec;
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec * 1000 + fsec / 1000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 3));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec * 1000.0 + fsec / 1000.0);
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec + fsec / 1'000'000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 6));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec + fsec / 1000000.0);
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ intresult = tm->tm_min;
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ intresult = tm->tm_hour;
break;
case DTK_DAY:
- result = tm->tm_mday;
+ intresult = tm->tm_mday;
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ intresult = tm->tm_mon;
break;
case DTK_QUARTER:
- result = (tm->tm_mon - 1) / 3 + 1;
+ intresult = (tm->tm_mon - 1) / 3 + 1;
break;
case DTK_WEEK:
- result = (float8) date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ intresult = date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
break;
case DTK_YEAR:
if (tm->tm_year > 0)
- result = tm->tm_year;
+ intresult = tm->tm_year;
else
/* there is no year 0, just 1 BC and 1 AD */
- result = tm->tm_year - 1;
+ intresult = tm->tm_year - 1;
break;
case DTK_DECADE:
@@ -4694,9 +4727,9 @@ timestamp_part(PG_FUNCTION_ARGS)
* is 11 BC thru 2 BC...
*/
if (tm->tm_year >= 0)
- result = tm->tm_year / 10;
+ intresult = tm->tm_year / 10;
else
- result = -((8 - (tm->tm_year - 1)) / 10);
+ intresult = -((8 - (tm->tm_year - 1)) / 10);
break;
case DTK_CENTURY:
@@ -4708,43 +4741,50 @@ timestamp_part(PG_FUNCTION_ARGS)
* ----
*/
if (tm->tm_year > 0)
- result = (tm->tm_year + 99) / 100;
+ intresult = (tm->tm_year + 99) / 100;
else
/* caution: C division may have negative remainder */
- result = -((99 - (tm->tm_year - 1)) / 100);
+ intresult = -((99 - (tm->tm_year - 1)) / 100);
break;
case DTK_MILLENNIUM:
/* see comments above. */
if (tm->tm_year > 0)
- result = (tm->tm_year + 999) / 1000;
+ intresult = (tm->tm_year + 999) / 1000;
else
- result = -((999 - (tm->tm_year - 1)) / 1000);
+ intresult = -((999 - (tm->tm_year - 1)) / 1000);
break;
case DTK_JULIAN:
- result = date2j(tm->tm_year, tm->tm_mon, tm->tm_mday);
- result += ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
- tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY;
+ if (retnumeric)
+ PG_RETURN_NUMERIC(numeric_add_opt_error(int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL));
+ else
+ PG_RETURN_FLOAT8(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday) +
+ ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
+ tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY);
break;
case DTK_ISOYEAR:
- result = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ intresult = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
/* Adjust BC years */
- if (result <= 0)
- result -= 1;
+ if (intresult <= 0)
+ intresult -= 1;
break;
case DTK_DOW:
case DTK_ISODOW:
- result = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
- if (val == DTK_ISODOW && result == 0)
- result = 7;
+ intresult = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && intresult == 0)
+ intresult = 7;
break;
case DTK_DOY:
- result = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
- - date2j(tm->tm_year, 1, 1) + 1);
+ intresult = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
break;
case DTK_TZ:
@@ -4755,7 +4795,7 @@ timestamp_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("timestamp units \"%s\" not supported",
lowunits)));
- result = 0;
+ intresult = 0;
}
}
else if (type == RESERV)
@@ -4764,11 +4804,37 @@ timestamp_part(PG_FUNCTION_ARGS)
{
case DTK_EPOCH:
epoch = SetEpochTimestamp();
- /* try to avoid precision loss in subtraction */
- if (timestamp < (PG_INT64_MAX + epoch))
- result = (timestamp - epoch) / 1000000.0;
+ /* (timestamp - epoch) / 1000000 */
+ if (retnumeric)
+ {
+ Numeric result;
+
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = int64_div_fast_to_numeric(timestamp - epoch, 6);
+ else
+ {
+ result = numeric_div_opt_error(numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ }
+ PG_RETURN_NUMERIC(result);
+ }
else
- result = ((float8) timestamp - epoch) / 1000000.0;
+ {
+ float8 result;
+
+ /* try to avoid precision loss in subtraction */
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = (timestamp - epoch) / 1000000.0;
+ else
+ result = ((float8) timestamp - epoch) / 1000000.0;
+ PG_RETURN_FLOAT8(result);
+ }
break;
default:
@@ -4776,7 +4842,7 @@ timestamp_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("timestamp units \"%s\" not supported",
lowunits)));
- result = 0;
+ intresult = 0;
}
}
@@ -4785,27 +4851,41 @@ timestamp_part(PG_FUNCTION_ARGS)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("timestamp units \"%s\" not recognized", lowunits)));
- result = 0;
+ intresult = 0;
}
- PG_RETURN_FLOAT8(result);
+ if (retnumeric)
+ PG_RETURN_NUMERIC(int64_to_numeric(intresult));
+ else
+ PG_RETURN_FLOAT8(intresult);
+}
+
+Datum
+timestamp_part(PG_FUNCTION_ARGS)
+{
+ return timestamp_part_common(fcinfo, false);
}
-/* timestamptz_part()
+Datum
+extract_timestamp(PG_FUNCTION_ARGS)
+{
+ return timestamp_part_common(fcinfo, true);
+}
+
+/* timestamptz_part() and extract_timestamptz()
* Extract specified field from timestamp with time zone.
*/
-Datum
-timestamptz_part(PG_FUNCTION_ARGS)
+static Datum
+timestamptz_part_common(PG_FUNCTION_ARGS, bool retnumeric)
{
text *units = PG_GETARG_TEXT_PP(0);
TimestampTz timestamp = PG_GETARG_TIMESTAMPTZ(1);
- float8 result;
+ int64 intresult;
Timestamp epoch;
int tz;
int type,
val;
char *lowunits;
- double dummy;
fsec_t fsec;
struct pg_tm tt,
*tm = &tt;
@@ -4820,11 +4900,28 @@ timestamptz_part(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(timestamp))
{
- result = NonFiniteTimestampTzPart(type, val, lowunits,
- TIMESTAMP_IS_NOBEGIN(timestamp),
- true);
- if (result)
- PG_RETURN_FLOAT8(result);
+ double r = NonFiniteTimestampTzPart(type, val, lowunits,
+ TIMESTAMP_IS_NOBEGIN(timestamp),
+ true);
+
+ if (r)
+ {
+ if (retnumeric)
+ {
+ if (r < 0)
+ return DirectFunctionCall3(numeric_in,
+ CStringGetDatum("-Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1));
+ else if (r > 0)
+ return DirectFunctionCall3(numeric_in,
+ CStringGetDatum("Infinity"),
+ ObjectIdGetDatum(InvalidOid),
+ Int32GetDatum(-1));
+ }
+ else
+ PG_RETURN_FLOAT8(r);
+ }
else
PG_RETURN_NULL();
}
@@ -4839,111 +4936,129 @@ timestamptz_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_TZ:
- result = -tz;
+ intresult = -tz;
break;
case DTK_TZ_MINUTE:
- result = -tz;
- result /= SECS_PER_MINUTE;
- FMODULO(result, dummy, (double) MINS_PER_HOUR);
+ intresult = (-tz / SECS_PER_MINUTE) % MINS_PER_HOUR;
break;
case DTK_TZ_HOUR:
- dummy = -tz;
- FMODULO(dummy, result, (double) SECS_PER_HOUR);
+ intresult = -tz / SECS_PER_HOUR;
break;
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ intresult = tm->tm_sec * 1000000 + fsec;
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec * 1000 + fsec / 1000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 3));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec * 1000.0 + fsec / 1000.0);
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec + fsec / 1'000'000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 6));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec + fsec / 1000000.0);
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ intresult = tm->tm_min;
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ intresult = tm->tm_hour;
break;
case DTK_DAY:
- result = tm->tm_mday;
+ intresult = tm->tm_mday;
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ intresult = tm->tm_mon;
break;
case DTK_QUARTER:
- result = (tm->tm_mon - 1) / 3 + 1;
+ intresult = (tm->tm_mon - 1) / 3 + 1;
break;
case DTK_WEEK:
- result = (float8) date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ intresult = date2isoweek(tm->tm_year, tm->tm_mon, tm->tm_mday);
break;
case DTK_YEAR:
if (tm->tm_year > 0)
- result = tm->tm_year;
+ intresult = tm->tm_year;
else
/* there is no year 0, just 1 BC and 1 AD */
- result = tm->tm_year - 1;
+ intresult = tm->tm_year - 1;
break;
case DTK_DECADE:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = tm->tm_year / 10;
+ intresult = tm->tm_year / 10;
else
- result = -((8 - (tm->tm_year - 1)) / 10);
+ intresult = -((8 - (tm->tm_year - 1)) / 10);
break;
case DTK_CENTURY:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = (tm->tm_year + 99) / 100;
+ intresult = (tm->tm_year + 99) / 100;
else
- result = -((99 - (tm->tm_year - 1)) / 100);
+ intresult = -((99 - (tm->tm_year - 1)) / 100);
break;
case DTK_MILLENNIUM:
/* see comments in timestamp_part */
if (tm->tm_year > 0)
- result = (tm->tm_year + 999) / 1000;
+ intresult = (tm->tm_year + 999) / 1000;
else
- result = -((999 - (tm->tm_year - 1)) / 1000);
+ intresult = -((999 - (tm->tm_year - 1)) / 1000);
break;
case DTK_JULIAN:
- result = date2j(tm->tm_year, tm->tm_mon, tm->tm_mday);
- result += ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
- tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY;
+ if (retnumeric)
+ PG_RETURN_NUMERIC(numeric_add_opt_error(int64_to_numeric(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)),
+ numeric_div_opt_error(int64_to_numeric(((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) + tm->tm_sec) * 1000000LL + fsec),
+ int64_to_numeric(SECS_PER_DAY * 1000000LL),
+ NULL),
+ NULL));
+ else
+ PG_RETURN_FLOAT8(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday) +
+ ((((tm->tm_hour * MINS_PER_HOUR) + tm->tm_min) * SECS_PER_MINUTE) +
+ tm->tm_sec + (fsec / 1000000.0)) / (double) SECS_PER_DAY);
break;
case DTK_ISOYEAR:
- result = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
+ intresult = date2isoyear(tm->tm_year, tm->tm_mon, tm->tm_mday);
/* Adjust BC years */
- if (result <= 0)
- result -= 1;
+ if (intresult <= 0)
+ intresult -= 1;
break;
case DTK_DOW:
case DTK_ISODOW:
- result = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
- if (val == DTK_ISODOW && result == 0)
- result = 7;
+ intresult = j2day(date2j(tm->tm_year, tm->tm_mon, tm->tm_mday));
+ if (val == DTK_ISODOW && intresult == 0)
+ intresult = 7;
break;
case DTK_DOY:
- result = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
- - date2j(tm->tm_year, 1, 1) + 1);
+ intresult = (date2j(tm->tm_year, tm->tm_mon, tm->tm_mday)
+ - date2j(tm->tm_year, 1, 1) + 1);
break;
default:
@@ -4951,7 +5066,7 @@ timestamptz_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("timestamp with time zone units \"%s\" not supported",
lowunits)));
- result = 0;
+ intresult = 0;
}
}
@@ -4961,11 +5076,37 @@ timestamptz_part(PG_FUNCTION_ARGS)
{
case DTK_EPOCH:
epoch = SetEpochTimestamp();
- /* try to avoid precision loss in subtraction */
- if (timestamp < (PG_INT64_MAX + epoch))
- result = (timestamp - epoch) / 1000000.0;
+ /* (timestamp - epoch) / 1000000 */
+ if (retnumeric)
+ {
+ Numeric result;
+
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = int64_div_fast_to_numeric(timestamp - epoch, 6);
+ else
+ {
+ result = numeric_div_opt_error(numeric_sub_opt_error(int64_to_numeric(timestamp),
+ int64_to_numeric(epoch),
+ NULL),
+ int64_to_numeric(1000000),
+ NULL);
+ result = DatumGetNumeric(DirectFunctionCall2(numeric_round,
+ NumericGetDatum(result),
+ Int32GetDatum(6)));
+ }
+ PG_RETURN_NUMERIC(result);
+ }
else
- result = ((float8) timestamp - epoch) / 1000000.0;
+ {
+ float8 result;
+
+ /* try to avoid precision loss in subtraction */
+ if (timestamp < (PG_INT64_MAX + epoch))
+ result = (timestamp - epoch) / 1000000.0;
+ else
+ result = ((float8) timestamp - epoch) / 1000000.0;
+ PG_RETURN_FLOAT8(result);
+ }
break;
default:
@@ -4973,7 +5114,7 @@ timestamptz_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("timestamp with time zone units \"%s\" not supported",
lowunits)));
- result = 0;
+ intresult = 0;
}
}
else
@@ -4983,22 +5124,37 @@ timestamptz_part(PG_FUNCTION_ARGS)
errmsg("timestamp with time zone units \"%s\" not recognized",
lowunits)));
- result = 0;
+ intresult = 0;
}
- PG_RETURN_FLOAT8(result);
+ if (retnumeric)
+ PG_RETURN_NUMERIC(int64_to_numeric(intresult));
+ else
+ PG_RETURN_FLOAT8(intresult);
+}
+
+Datum
+timestamptz_part(PG_FUNCTION_ARGS)
+{
+ return timestamptz_part_common(fcinfo, false);
+}
+
+Datum
+extract_timestamptz(PG_FUNCTION_ARGS)
+{
+ return timestamptz_part_common(fcinfo, true);
}
-/* interval_part()
+/* interval_part() and extract_interval()
* Extract specified field from interval.
*/
-Datum
-interval_part(PG_FUNCTION_ARGS)
+static Datum
+interval_part_common(PG_FUNCTION_ARGS, bool retnumeric)
{
text *units = PG_GETARG_TEXT_PP(0);
Interval *interval = PG_GETARG_INTERVAL_P(1);
- float8 result;
+ int64 intresult;
int type,
val;
char *lowunits;
@@ -5021,54 +5177,68 @@ interval_part(PG_FUNCTION_ARGS)
switch (val)
{
case DTK_MICROSEC:
- result = tm->tm_sec * 1000000.0 + fsec;
+ intresult = tm->tm_sec * 1000000 + fsec;
break;
case DTK_MILLISEC:
- result = tm->tm_sec * 1000.0 + fsec / 1000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec * 1000 + fsec / 1000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 3));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec * 1000.0 + fsec / 1000.0);
break;
case DTK_SECOND:
- result = tm->tm_sec + fsec / 1000000.0;
+ if (retnumeric)
+ /*---
+ * tm->tm_sec + fsec / 1'000'000
+ * = (tm->tm_sec * 1'000'000 + fsec) / 1'000'000
+ */
+ PG_RETURN_NUMERIC(int64_div_fast_to_numeric(tm->tm_sec * 1000000LL + fsec, 6));
+ else
+ PG_RETURN_FLOAT8(tm->tm_sec + fsec / 1000000.0);
break;
case DTK_MINUTE:
- result = tm->tm_min;
+ intresult = tm->tm_min;
break;
case DTK_HOUR:
- result = tm->tm_hour;
+ intresult = tm->tm_hour;
break;
case DTK_DAY:
- result = tm->tm_mday;
+ intresult = tm->tm_mday;
break;
case DTK_MONTH:
- result = tm->tm_mon;
+ intresult = tm->tm_mon;
break;
case DTK_QUARTER:
- result = (tm->tm_mon / 3) + 1;
+ intresult = (tm->tm_mon / 3) + 1;
break;
case DTK_YEAR:
- result = tm->tm_year;
+ intresult = tm->tm_year;
break;
case DTK_DECADE:
/* caution: C division may have negative remainder */
- result = tm->tm_year / 10;
+ intresult = tm->tm_year / 10;
break;
case DTK_CENTURY:
/* caution: C division may have negative remainder */
- result = tm->tm_year / 100;
+ intresult = tm->tm_year / 100;
break;
case DTK_MILLENNIUM:
/* caution: C division may have negative remainder */
- result = tm->tm_year / 1000;
+ intresult = tm->tm_year / 1000;
break;
default:
@@ -5076,22 +5246,60 @@ interval_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("interval units \"%s\" not supported",
lowunits)));
- result = 0;
+ intresult = 0;
}
-
}
else
{
elog(ERROR, "could not convert interval to tm");
- result = 0;
+ intresult = 0;
}
}
else if (type == RESERV && val == DTK_EPOCH)
{
- result = interval->time / 1000000.0;
- result += ((double) DAYS_PER_YEAR * SECS_PER_DAY) * (interval->month / MONTHS_PER_YEAR);
- result += ((double) DAYS_PER_MONTH * SECS_PER_DAY) * (interval->month % MONTHS_PER_YEAR);
- result += ((double) SECS_PER_DAY) * interval->day;
+ if (retnumeric)
+ {
+ Numeric result;
+ int64 secs_from_day_month;
+ int64 val;
+
+ /* this always fits into int64 */
+ secs_from_day_month = ((int64) DAYS_PER_YEAR * (interval->month / MONTHS_PER_YEAR) +
+ (int64) DAYS_PER_MONTH * (interval->month % MONTHS_PER_YEAR) +
+ interval->day) * SECS_PER_DAY;
+
+ /*---
+ * result = secs_from_day_month + interval->time / 1'000'000
+ * = (secs_from_day_month * 1'000'000 + interval->time) / 1'000'000
+ */
+
+ /*
+ * Try the computation inside int64; if it overflows, do it in
+ * numeric (slower). This overflow happens around 10^9 days, so
+ * not common in practice.
+ */
+ if (!pg_mul_s64_overflow(secs_from_day_month, 1000000, &val) &&
+ !pg_add_s64_overflow(val, interval->time, &val))
+ result = int64_div_fast_to_numeric(val, 6);
+ else
+ result =
+ numeric_add_opt_error(int64_div_fast_to_numeric(interval->time, 6),
+ int64_to_numeric(secs_from_day_month),
+ NULL);
+
+ PG_RETURN_NUMERIC(result);
+ }
+ else
+ {
+ float8 result;
+
+ result = interval->time / 1000000.0;
+ result += ((double) DAYS_PER_YEAR * SECS_PER_DAY) * (interval->month / MONTHS_PER_YEAR);
+ result += ((double) DAYS_PER_MONTH * SECS_PER_DAY) * (interval->month % MONTHS_PER_YEAR);
+ result += ((double) SECS_PER_DAY) * interval->day;
+
+ PG_RETURN_FLOAT8(result);
+ }
}
else
{
@@ -5099,10 +5307,25 @@ interval_part(PG_FUNCTION_ARGS)
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("interval units \"%s\" not recognized",
lowunits)));
- result = 0;
+ intresult = 0;
}
- PG_RETURN_FLOAT8(result);
+ if (retnumeric)
+ PG_RETURN_NUMERIC(int64_to_numeric(intresult));
+ else
+ PG_RETURN_FLOAT8(intresult);
+}
+
+Datum
+interval_part(PG_FUNCTION_ARGS)
+{
+ return interval_part_common(fcinfo, false);
+}
+
+Datum
+extract_interval(PG_FUNCTION_ARGS)
+{
+ return interval_part_common(fcinfo, true);
}
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 69ffd0c3f4..bf3d6afda9 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -2339,9 +2339,15 @@
{ oid => '1171', descr => 'extract field from timestamp with time zone',
proname => 'date_part', provolatile => 's', prorettype => 'float8',
proargtypes => 'text timestamptz', prosrc => 'timestamptz_part' },
+{ oid => '9983', descr => 'extract field from timestamp with time zone',
+ proname => 'extract', provolatile => 's', prorettype => 'numeric',
+ proargtypes => 'text timestamptz', prosrc => 'extract_timestamptz' },
{ oid => '1172', descr => 'extract field from interval',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text interval', prosrc => 'interval_part' },
+{ oid => '9984', descr => 'extract field from interval',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text interval',
+ prosrc => 'extract_interval' },
{ oid => '1174', descr => 'convert date to timestamp with time zone',
proname => 'timestamptz', provolatile => 's', prorettype => 'timestamptz',
proargtypes => 'date', prosrc => 'date_timestamptz' },
@@ -2489,6 +2495,9 @@
{ oid => '1273', descr => 'extract field from time with time zone',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text timetz',
prosrc => 'timetz_part' },
+{ oid => '9981', descr => 'extract field from time with time zone',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text timetz',
+ prosrc => 'extract_timetz' },
{ oid => '1274',
proname => 'int84pl', prorettype => 'int8', proargtypes => 'int8 int4',
prosrc => 'int84pl' },
@@ -2834,9 +2843,15 @@
proname => 'date_part', prolang => 'sql', prorettype => 'float8',
proargtypes => 'text date',
prosrc => 'select pg_catalog.date_part($1, cast($2 as timestamp without time zone))' },
+{ oid => '9979', descr => 'extract field from date',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text date',
+ prosrc => 'extract_date' },
{ oid => '1385', descr => 'extract field from time',
proname => 'date_part', prorettype => 'float8', proargtypes => 'text time',
prosrc => 'time_part' },
+{ oid => '9980', descr => 'extract field from time',
+ proname => 'extract', prorettype => 'numeric', proargtypes => 'text time',
+ prosrc => 'extract_time' },
{ oid => '1386',
descr => 'date difference from today preserving months and years',
proname => 'age', prolang => 'sql', provolatile => 's',
@@ -5835,6 +5850,9 @@
{ oid => '2021', descr => 'extract field from timestamp',
proname => 'date_part', prorettype => 'float8',
proargtypes => 'text timestamp', prosrc => 'timestamp_part' },
+{ oid => '9982', descr => 'extract field from timestamp',
+ proname => 'extract', prorettype => 'numeric',
+ proargtypes => 'text timestamp', prosrc => 'extract_timestamp' },
{ oid => '2024', descr => 'convert date to timestamp',
proname => 'timestamp', prorettype => 'timestamp', proargtypes => 'date',
prosrc => 'date_timestamp' },
diff --git a/src/include/utils/numeric.h b/src/include/utils/numeric.h
index a362b5beb2..dfc8688ca2 100644
--- a/src/include/utils/numeric.h
+++ b/src/include/utils/numeric.h
@@ -63,6 +63,7 @@ extern char *numeric_out_sci(Numeric num, int scale);
extern char *numeric_normalize(Numeric num);
extern Numeric int64_to_numeric(int64 val);
+extern Numeric int64_div_fast_to_numeric(int64 val1, int log10val2);
extern Numeric numeric_add_opt_error(Numeric num1, Numeric num2,
bool *have_error);
diff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out
index 87fd2fbfd0..f50ef76685 100644
--- a/src/test/regress/expected/create_view.out
+++ b/src/test/regress/expected/create_view.out
@@ -1787,7 +1787,7 @@ select
select pg_get_viewdef('tt201v', true);
pg_get_viewdef
-----------------------------------------------------------------------------------------------
- SELECT date_part('day'::text, now()) AS extr, +
+ SELECT EXTRACT(day FROM now()) AS extr, +
((now(), '@ 1 day'::interval) OVERLAPS (CURRENT_TIMESTAMP(2), '@ 1 day'::interval)) AS o,+
(('foo'::text) IS NORMALIZED) AS isn, +
(('foo'::text) IS NFKC NORMALIZED) AS isnn, +
diff --git a/src/test/regress/expected/date.out b/src/test/regress/expected/date.out
index 83ad111f1c..c8b0566ff4 100644
--- a/src/test/regress/expected/date.out
+++ b/src/test/regress/expected/date.out
@@ -966,66 +966,66 @@ SELECT f1 as "date",
-- epoch
--
SELECT EXTRACT(EPOCH FROM DATE '1970-01-01'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
--
-- century
--
SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
- date_part
------------
- -2
+ extract
+---------
+ -2
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-01-01'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '1900-12-31'); -- 19
- date_part
------------
- 19
+ extract
+---------
+ 19
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '1901-01-01'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2000-12-31'); -- 20
- date_part
------------
- 20
+ extract
+---------
+ 20
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2001-01-01'); -- 21
- date_part
------------
- 21
+ extract
+---------
+ 21
(1 row)
SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True; -- true
@@ -1038,217 +1038,218 @@ SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True; -- true
-- millennium
--
SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01'); -- 2
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31'); -- 2
- date_part
------------
- 2
+ extract
+---------
+ 2
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01'); -- 3
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
-- next test to be fixed on the turn of the next millennium;-)
SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE); -- 3
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
--
-- decade
--
SELECT EXTRACT(DECADE FROM DATE '1994-12-25'); -- 199
- date_part
------------
- 199
+ extract
+---------
+ 199
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0010-01-01'); -- 1
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0009-12-31'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); -- 0
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); -- -1
- date_part
------------
- -1
+ extract
+---------
+ -1
(1 row)
SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); -- -2
- date_part
------------
- -2
+ extract
+---------
+ -2
(1 row)
--
-- all possible fields
--
SELECT EXTRACT(MICROSECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "microseconds" not supported
SELECT EXTRACT(MILLISECONDS FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "milliseconds" not supported
SELECT EXTRACT(SECOND FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "second" not supported
SELECT EXTRACT(MINUTE FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "minute" not supported
SELECT EXTRACT(HOUR FROM DATE '2020-08-11');
- date_part
------------
- 0
-(1 row)
-
+ERROR: date units "hour" not supported
SELECT EXTRACT(DAY FROM DATE '2020-08-11');
- date_part
------------
- 11
+ extract
+---------
+ 11
(1 row)
SELECT EXTRACT(MONTH FROM DATE '2020-08-11');
- date_part
------------
- 8
+ extract
+---------
+ 8
(1 row)
SELECT EXTRACT(YEAR FROM DATE '2020-08-11');
- date_part
------------
- 2020
+ extract
+---------
+ 2020
+(1 row)
+
+SELECT EXTRACT(YEAR FROM DATE '2020-08-11 BC');
+ extract
+---------
+ -2020
(1 row)
SELECT EXTRACT(DECADE FROM DATE '2020-08-11');
- date_part
------------
- 202
+ extract
+---------
+ 202
(1 row)
SELECT EXTRACT(CENTURY FROM DATE '2020-08-11');
- date_part
------------
- 21
+ extract
+---------
+ 21
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE '2020-08-11');
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11');
- date_part
------------
- 2020
+ extract
+---------
+ 2020
+(1 row)
+
+SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11 BC');
+ extract
+---------
+ -2020
(1 row)
SELECT EXTRACT(QUARTER FROM DATE '2020-08-11');
- date_part
------------
- 3
+ extract
+---------
+ 3
(1 row)
SELECT EXTRACT(WEEK FROM DATE '2020-08-11');
- date_part
------------
- 33
+ extract
+---------
+ 33
(1 row)
SELECT EXTRACT(DOW FROM DATE '2020-08-11');
- date_part
------------
- 2
+ extract
+---------
+ 2
+(1 row)
+
+SELECT EXTRACT(DOW FROM DATE '2020-08-16');
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(ISODOW FROM DATE '2020-08-11');
- date_part
------------
- 2
+ extract
+---------
+ 2
+(1 row)
+
+SELECT EXTRACT(ISODOW FROM DATE '2020-08-16');
+ extract
+---------
+ 7
(1 row)
SELECT EXTRACT(DOY FROM DATE '2020-08-11');
- date_part
------------
- 224
+ extract
+---------
+ 224
(1 row)
SELECT EXTRACT(TIMEZONE FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone" not supported
SELECT EXTRACT(TIMEZONE_M FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_m" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_m" not supported
SELECT EXTRACT(TIMEZONE_H FROM DATE '2020-08-11');
-ERROR: timestamp units "timezone_h" not supported
-CONTEXT: SQL function "date_part" statement 1
+ERROR: date units "timezone_h" not supported
SELECT EXTRACT(EPOCH FROM DATE '2020-08-11');
- date_part
+ extract
------------
1597104000
(1 row)
SELECT EXTRACT(JULIAN FROM DATE '2020-08-11');
- date_part
------------
- 2459073
+ extract
+---------
+ 2459073
(1 row)
--
@@ -1344,173 +1345,124 @@ select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'
--
-- oscillating fields from non-finite date:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
--- all possible fields
-SELECT EXTRACT(MICROSECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
-SELECT EXTRACT(MILLISECONDS FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
-SELECT EXTRACT(SECOND FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
-SELECT EXTRACT(MINUTE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
+ extract
+---------
+
(1 row)
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
+ extract
+---------
+
(1 row)
+-- all supported fields
SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(MONTH FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(QUARTER FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(WEEK FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(DOW FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(ISODOW FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
SELECT EXTRACT(DOY FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
-SELECT EXTRACT(TIMEZONE FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
-SELECT EXTRACT(TIMEZONE_M FROM DATE 'infinity'); -- NULL
- date_part
------------
-
-(1 row)
-
-SELECT EXTRACT(TIMEZONE_H FROM DATE 'infinity'); -- NULL
- date_part
------------
-
+ extract
+---------
+
(1 row)
--
-- monotonic fields from non-finite date:
--
SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM DATE '-infinity'); -- -Infinity
- date_part
+ extract
-----------
-Infinity
(1 row)
--- all possible fields
+-- all supported fields
SELECT EXTRACT(YEAR FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(DECADE FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(CENTURY FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(JULIAN FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(ISOYEAR FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
- date_part
------------
- Infinity
+ extract
+----------
+ Infinity
(1 row)
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
-ERROR: timestamp units "microsec" not recognized
-CONTEXT: SQL function "date_part" statement 1
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- error
+ERROR: date units "microsec" not recognized
-- test constructors
select make_date(2013, 7, 15);
make_date
diff --git a/src/test/regress/expected/interval.out b/src/test/regress/expected/interval.out
index c5ffa9f2cc..0191949137 100644
--- a/src/test/regress/expected/interval.out
+++ b/src/test/regress/expected/interval.out
@@ -948,18 +948,18 @@ SELECT f1,
EXTRACT(MILLENNIUM FROM f1) AS MILLENNIUM,
EXTRACT(EPOCH FROM f1) AS EPOCH
FROM INTERVAL_TBL;
- f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
--------------------------------+-------------+-------------+--------+--------+------+-----+-------+---------+------+--------+---------+------------+------------
- @ 1 min | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60
- @ 5 hours | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000
- @ 10 days | 0 | 0 | 0 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000
- @ 34 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072958400
- @ 3 mons | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000
- @ 14 secs ago | -14000000 | -14000 | -14 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14
- @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000 | 4 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784
- @ 6 years | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189345600
- @ 5 mons | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000
- @ 5 mons 12 hours | 0 | 0 | 0 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200
+ f1 | microsecond | millisecond | second | minute | hour | day | month | quarter | year | decade | century | millennium | epoch
+-------------------------------+-------------+-------------+------------+--------+------+-----+-------+---------+------+--------+---------+------------+-------------------
+ @ 1 min | 0 | 0.000 | 0.000000 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 60.000000
+ @ 5 hours | 0 | 0.000 | 0.000000 | 0 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 18000.000000
+ @ 10 days | 0 | 0.000 | 0.000000 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 864000.000000
+ @ 34 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 34 | 3 | 0 | 0 | 1072224000.000000
+ @ 3 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 7776000.000000
+ @ 14 secs ago | -14000000 | -14000.000 | -14.000000 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | -14.000000
+ @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000.000 | 4.000000 | 3 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 93784.000000
+ @ 6 years | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 189216000.000000
+ @ 5 mons | 0 | 0.000 | 0.000000 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 12960000.000000
+ @ 5 mons 12 hours | 0 | 0.000 | 0.000000 | 0 | 12 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 13003200.000000
(10 rows)
SELECT EXTRACT(FORTNIGHT FROM INTERVAL '2 days'); -- error
@@ -967,50 +967,79 @@ ERROR: interval units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM INTERVAL '2 days'); -- error
ERROR: interval units "timezone" not supported
SELECT EXTRACT(DECADE FROM INTERVAL '100 y');
- date_part
------------
- 10
+ extract
+---------
+ 10
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '99 y');
- date_part
------------
- 9
+ extract
+---------
+ 9
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '-99 y');
- date_part
------------
- -9
+ extract
+---------
+ -9
(1 row)
SELECT EXTRACT(DECADE FROM INTERVAL '-100 y');
- date_part
------------
- -10
+ extract
+---------
+ -10
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');
- date_part
------------
- 1
+ extract
+---------
+ 1
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');
- date_part
------------
- 0
+ extract
+---------
+ 0
(1 row)
SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y');
- date_part
------------
- -1
+ extract
+---------
+ -1
+(1 row)
+
+-- date_part implementation is mostly the same as extract, so only
+-- test a few cases for additional coverage.
+SELECT f1,
+ date_part('microsecond', f1) AS microsecond,
+ date_part('millisecond', f1) AS millisecond,
+ date_part('second', f1) AS second,
+ date_part('epoch', f1) AS epoch
+ FROM INTERVAL_TBL;
+ f1 | microsecond | millisecond | second | epoch
+-------------------------------+-------------+-------------+--------+------------
+ @ 1 min | 0 | 0 | 0 | 60
+ @ 5 hours | 0 | 0 | 0 | 18000
+ @ 10 days | 0 | 0 | 0 | 864000
+ @ 34 years | 0 | 0 | 0 | 1072958400
+ @ 3 mons | 0 | 0 | 0 | 7776000
+ @ 14 secs ago | -14000000 | -14000 | -14 | -14
+ @ 1 day 2 hours 3 mins 4 secs | 4000000 | 4000 | 4 | 93784
+ @ 6 years | 0 | 0 | 0 | 189345600
+ @ 5 mons | 0 | 0 | 0 | 12960000
+ @ 5 mons 12 hours | 0 | 0 | 0 | 13003200
+(10 rows)
+
+-- internal overflow test case
+SELECT extract(epoch from interval '1000000000 days');
+ extract
+-----------------------
+ 86400000000000.000000
(1 row)
diff --git a/src/test/regress/expected/psql_crosstab.out b/src/test/regress/expected/psql_crosstab.out
index eae6fbd051..e09e331016 100644
--- a/src/test/regress/expected/psql_crosstab.out
+++ b/src/test/regress/expected/psql_crosstab.out
@@ -17,12 +17,12 @@ SELECT v, EXTRACT(year FROM d), count(*)
FROM ctv_data
GROUP BY 1, 2
ORDER BY 1, 2;
- v | date_part | count
-----+-----------+-------
- v0 | 2014 | 2
- v0 | 2015 | 1
- v1 | 2015 | 3
- v2 | 2015 | 1
+ v | extract | count
+----+---------+-------
+ v0 | 2014 | 2
+ v0 | 2015 | 1
+ v1 | 2015 | 3
+ v2 | 2015 | 1
(4 rows)
-- basic usage with 3 columns
diff --git a/src/test/regress/expected/time.out b/src/test/regress/expected/time.out
index 5303cc0c94..39b409feca 100644
--- a/src/test/regress/expected/time.out
+++ b/src/test/regress/expected/time.out
@@ -131,33 +131,33 @@ HINT: Could not choose a best candidate operator. You might need to add explici
-- test EXTRACT
--
SELECT EXTRACT(MICROSECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 25575401
+ extract
+----------
+ 25575401
(1 row)
SELECT EXTRACT(MILLISECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
-----------
25575.401
(1 row)
SELECT EXTRACT(SECOND FROM TIME '2020-05-26 13:30:25.575401');
- date_part
+ extract
-----------
25.575401
(1 row)
SELECT EXTRACT(MINUTE FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 30
+ extract
+---------
+ 30
(1 row)
SELECT EXTRACT(HOUR FROM TIME '2020-05-26 13:30:25.575401');
- date_part
------------
- 13
+ extract
+---------
+ 13
(1 row)
SELECT EXTRACT(DAY FROM TIME '2020-05-26 13:30:25.575401'); -- error
@@ -167,6 +167,32 @@ ERROR: "time" units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM TIME '2020-05-26 13:30:25.575401'); -- error
ERROR: "time" units "timezone" not recognized
SELECT EXTRACT(EPOCH FROM TIME '2020-05-26 13:30:25.575401');
+ extract
+--------------
+ 48625.575401
+(1 row)
+
+-- date_part implementation is mostly the same as extract, so only
+-- test a few cases for additional coverage.
+SELECT date_part('microsecond', TIME '2020-05-26 13:30:25.575401');
+ date_part
+-----------
+ 25575401
+(1 row)
+
+SELECT date_part('millisecond', TIME '2020-05-26 13:30:25.575401');
+ date_part
+-----------
+ 25575.401
+(1 row)
+
+SELECT date_part('second', TIME '2020-05-26 13:30:25.575401');
+ date_part
+-----------
+ 25.575401
+(1 row)
+
+SELECT date_part('epoch', TIME '2020-05-26 13:30:25.575401');
date_part
--------------
48625.575401
diff --git a/src/test/regress/expected/timestamp.out b/src/test/regress/expected/timestamp.out
index fff3cf08c1..690656dfb2 100644
--- a/src/test/regress/expected/timestamp.out
+++ b/src/test/regress/expected/timestamp.out
@@ -1012,6 +1012,84 @@ SELECT d1 as "timestamp",
Mon Jan 01 17:32:01 2001 | 200 | 21 | 3 | 2451912 | 978370321
(65 rows)
+-- extract implementation is mostly the same as date_part, so only
+-- test a few cases for additional coverage.
+SELECT d1 as "timestamp",
+ extract(microseconds from d1) AS microseconds,
+ extract(milliseconds from d1) AS milliseconds,
+ extract(seconds from d1) AS seconds,
+ round(extract(julian from d1)) AS julian,
+ extract(epoch from d1) AS epoch
+ FROM TIMESTAMP_TBL;
+ timestamp | microseconds | milliseconds | seconds | julian | epoch
+-----------------------------+--------------+--------------+-----------+-----------+---------------------
+ -infinity | | | | -Infinity | -Infinity
+ infinity | | | | Infinity | Infinity
+ Thu Jan 01 00:00:00 1970 | 0 | 0.000 | 0.000000 | 2440588 | 0.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:02 1997 | 2000000 | 2000.000 | 2.000000 | 2450491 | 855595922.000000
+ Mon Feb 10 17:32:01.4 1997 | 1400000 | 1400.000 | 1.400000 | 2450491 | 855595921.400000
+ Mon Feb 10 17:32:01.5 1997 | 1500000 | 1500.000 | 1.500000 | 2450491 | 855595921.500000
+ Mon Feb 10 17:32:01.6 1997 | 1600000 | 1600.000 | 1.600000 | 2450491 | 855595921.600000
+ Thu Jan 02 00:00:00 1997 | 0 | 0.000 | 0.000000 | 2450451 | 852163200.000000
+ Thu Jan 02 03:04:05 1997 | 5000000 | 5000.000 | 5.000000 | 2450451 | 852174245.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Tue Jun 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450611 | 865963921.000000
+ Sat Sep 22 18:19:20 2001 | 20000000 | 20000.000 | 20.000000 | 2452176 | 1001182760.000000
+ Wed Mar 15 08:14:01 2000 | 1000000 | 1000.000 | 1.000000 | 2451619 | 953108041.000000
+ Wed Mar 15 13:14:02 2000 | 2000000 | 2000.000 | 2.000000 | 2451620 | 953126042.000000
+ Wed Mar 15 12:14:03 2000 | 3000000 | 3000.000 | 3.000000 | 2451620 | 953122443.000000
+ Wed Mar 15 03:14:04 2000 | 4000000 | 4000.000 | 4.000000 | 2451619 | 953090044.000000
+ Wed Mar 15 02:14:05 2000 | 5000000 | 5000.000 | 5.000000 | 2451619 | 953086445.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:00 1997 | 0 | 0.000 | 0.000000 | 2450491 | 855595920.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Tue Jun 10 18:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450611 | 865967521.000000
+ Mon Feb 10 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450491 | 855595921.000000
+ Tue Feb 11 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450492 | 855682321.000000
+ Wed Feb 12 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450493 | 855768721.000000
+ Thu Feb 13 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450494 | 855855121.000000
+ Fri Feb 14 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450495 | 855941521.000000
+ Sat Feb 15 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450496 | 856027921.000000
+ Sun Feb 16 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450497 | 856114321.000000
+ Tue Feb 16 17:32:01 0097 BC | 1000000 | 1000.000 | 1.000000 | 1686043 | -65192711279.000000
+ Sat Feb 16 17:32:01 0097 | 1000000 | 1000.000 | 1.000000 | 1756537 | -59102029679.000000
+ Thu Feb 16 17:32:01 0597 | 1000000 | 1000.000 | 1.000000 | 1939158 | -43323575279.000000
+ Tue Feb 16 17:32:01 1097 | 1000000 | 1000.000 | 1.000000 | 2121779 | -27545120879.000000
+ Sat Feb 16 17:32:01 1697 | 1000000 | 1000.000 | 1.000000 | 2340925 | -8610906479.000000
+ Thu Feb 16 17:32:01 1797 | 1000000 | 1000.000 | 1.000000 | 2377449 | -5455232879.000000
+ Tue Feb 16 17:32:01 1897 | 1000000 | 1000.000 | 1.000000 | 2413973 | -2299559279.000000
+ Sun Feb 16 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450497 | 856114321.000000
+ Sat Feb 16 17:32:01 2097 | 1000000 | 1000.000 | 1.000000 | 2487022 | 4011874321.000000
+ Wed Feb 28 17:32:01 1996 | 1000000 | 1000.000 | 1.000000 | 2450143 | 825528721.000000
+ Thu Feb 29 17:32:01 1996 | 1000000 | 1000.000 | 1.000000 | 2450144 | 825615121.000000
+ Fri Mar 01 17:32:01 1996 | 1000000 | 1000.000 | 1.000000 | 2450145 | 825701521.000000
+ Mon Dec 30 17:32:01 1996 | 1000000 | 1000.000 | 1.000000 | 2450449 | 851967121.000000
+ Tue Dec 31 17:32:01 1996 | 1000000 | 1000.000 | 1.000000 | 2450450 | 852053521.000000
+ Wed Jan 01 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450451 | 852139921.000000
+ Fri Feb 28 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450509 | 857151121.000000
+ Sat Mar 01 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450510 | 857237521.000000
+ Tue Dec 30 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450814 | 883503121.000000
+ Wed Dec 31 17:32:01 1997 | 1000000 | 1000.000 | 1.000000 | 2450815 | 883589521.000000
+ Fri Dec 31 17:32:01 1999 | 1000000 | 1000.000 | 1.000000 | 2451545 | 946661521.000000
+ Sat Jan 01 17:32:01 2000 | 1000000 | 1000.000 | 1.000000 | 2451546 | 946747921.000000
+ Sun Dec 31 17:32:01 2000 | 1000000 | 1000.000 | 1.000000 | 2451911 | 978283921.000000
+ Mon Jan 01 17:32:01 2001 | 1000000 | 1000.000 | 1.000000 | 2451912 | 978370321.000000
+(65 rows)
+
-- value near upper bound uses special case in code
SELECT date_part('epoch', '294270-01-01 00:00:00'::timestamp);
date_part
@@ -1019,6 +1097,19 @@ SELECT date_part('epoch', '294270-01-01 00:00:00'::timestamp);
9224097091200
(1 row)
+SELECT extract(epoch from '294270-01-01 00:00:00'::timestamp);
+ extract
+----------------------
+ 9224097091200.000000
+(1 row)
+
+-- another internal overflow test case
+SELECT extract(epoch from '5000-01-01 00:00:00'::timestamp);
+ extract
+--------------------
+ 95617584000.000000
+(1 row)
+
-- TO_CHAR()
SELECT to_char(d1, 'DAY Day day DY Dy dy MONTH Month month RM MON Mon mon')
FROM TIMESTAMP_TBL;
diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out
index ff07008073..421ef2578a 100644
--- a/src/test/regress/expected/timestamptz.out
+++ b/src/test/regress/expected/timestamptz.out
@@ -1189,6 +1189,85 @@ SELECT d1 as timestamptz,
Mon Jan 01 17:32:01 2001 PST | -28800 | -8 | 0
(66 rows)
+-- extract implementation is mostly the same as date_part, so only
+-- test a few cases for additional coverage.
+SELECT d1 as "timestamp",
+ extract(microseconds from d1) AS microseconds,
+ extract(milliseconds from d1) AS milliseconds,
+ extract(seconds from d1) AS seconds,
+ round(extract(julian from d1)) AS julian,
+ extract(epoch from d1) AS epoch
+ FROM TIMESTAMPTZ_TBL;
+ timestamp | microseconds | milliseconds | seconds | julian | epoch
+---------------------------------+--------------+--------------+-----------+-----------+---------------------
+ -infinity | | | | -Infinity | -Infinity
+ infinity | | | | Infinity | Infinity
+ Wed Dec 31 16:00:00 1969 PST | 0 | 0.000 | 0.000000 | 2440588 | 0.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:02 1997 PST | 2000000 | 2000.000 | 2.000000 | 2450491 | 855624722.000000
+ Mon Feb 10 17:32:01.4 1997 PST | 1400000 | 1400.000 | 1.400000 | 2450491 | 855624721.400000
+ Mon Feb 10 17:32:01.5 1997 PST | 1500000 | 1500.000 | 1.500000 | 2450491 | 855624721.500000
+ Mon Feb 10 17:32:01.6 1997 PST | 1600000 | 1600.000 | 1.600000 | 2450491 | 855624721.600000
+ Thu Jan 02 00:00:00 1997 PST | 0 | 0.000 | 0.000000 | 2450451 | 852192000.000000
+ Thu Jan 02 03:04:05 1997 PST | 5000000 | 5000.000 | 5.000000 | 2450451 | 852203045.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Tue Jun 10 17:32:01 1997 PDT | 1000000 | 1000.000 | 1.000000 | 2450611 | 865989121.000000
+ Sat Sep 22 18:19:20 2001 PDT | 20000000 | 20000.000 | 20.000000 | 2452176 | 1001207960.000000
+ Wed Mar 15 08:14:01 2000 PST | 1000000 | 1000.000 | 1.000000 | 2451619 | 953136841.000000
+ Wed Mar 15 04:14:02 2000 PST | 2000000 | 2000.000 | 2.000000 | 2451619 | 953122442.000000
+ Wed Mar 15 02:14:03 2000 PST | 3000000 | 3000.000 | 3.000000 | 2451619 | 953115243.000000
+ Wed Mar 15 03:14:04 2000 PST | 4000000 | 4000.000 | 4.000000 | 2451619 | 953118844.000000
+ Wed Mar 15 01:14:05 2000 PST | 5000000 | 5000.000 | 5.000000 | 2451619 | 953111645.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:00 1997 PST | 0 | 0.000 | 0.000000 | 2450491 | 855624720.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Mon Feb 10 09:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450490 | 855595921.000000
+ Mon Feb 10 09:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450490 | 855595921.000000
+ Mon Feb 10 09:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450490 | 855595921.000000
+ Mon Feb 10 14:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855613921.000000
+ Thu Jul 10 14:32:01 1997 PDT | 1000000 | 1000.000 | 1.000000 | 2450641 | 868570321.000000
+ Tue Jun 10 18:32:01 1997 PDT | 1000000 | 1000.000 | 1.000000 | 2450611 | 865992721.000000
+ Mon Feb 10 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450491 | 855624721.000000
+ Tue Feb 11 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450492 | 855711121.000000
+ Wed Feb 12 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450493 | 855797521.000000
+ Thu Feb 13 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450494 | 855883921.000000
+ Fri Feb 14 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450495 | 855970321.000000
+ Sat Feb 15 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450496 | 856056721.000000
+ Sun Feb 16 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450497 | 856143121.000000
+ Tue Feb 16 17:32:01 0097 PST BC | 1000000 | 1000.000 | 1.000000 | 1686043 | -65192682479.000000
+ Sat Feb 16 17:32:01 0097 PST | 1000000 | 1000.000 | 1.000000 | 1756537 | -59102000879.000000
+ Thu Feb 16 17:32:01 0597 PST | 1000000 | 1000.000 | 1.000000 | 1939158 | -43323546479.000000
+ Tue Feb 16 17:32:01 1097 PST | 1000000 | 1000.000 | 1.000000 | 2121779 | -27545092079.000000
+ Sat Feb 16 17:32:01 1697 PST | 1000000 | 1000.000 | 1.000000 | 2340925 | -8610877679.000000
+ Thu Feb 16 17:32:01 1797 PST | 1000000 | 1000.000 | 1.000000 | 2377449 | -5455204079.000000
+ Tue Feb 16 17:32:01 1897 PST | 1000000 | 1000.000 | 1.000000 | 2413973 | -2299530479.000000
+ Sun Feb 16 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450497 | 856143121.000000
+ Sat Feb 16 17:32:01 2097 PST | 1000000 | 1000.000 | 1.000000 | 2487022 | 4011903121.000000
+ Wed Feb 28 17:32:01 1996 PST | 1000000 | 1000.000 | 1.000000 | 2450143 | 825557521.000000
+ Thu Feb 29 17:32:01 1996 PST | 1000000 | 1000.000 | 1.000000 | 2450144 | 825643921.000000
+ Fri Mar 01 17:32:01 1996 PST | 1000000 | 1000.000 | 1.000000 | 2450145 | 825730321.000000
+ Mon Dec 30 17:32:01 1996 PST | 1000000 | 1000.000 | 1.000000 | 2450449 | 851995921.000000
+ Tue Dec 31 17:32:01 1996 PST | 1000000 | 1000.000 | 1.000000 | 2450450 | 852082321.000000
+ Wed Jan 01 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450451 | 852168721.000000
+ Fri Feb 28 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450509 | 857179921.000000
+ Sat Mar 01 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450510 | 857266321.000000
+ Tue Dec 30 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450814 | 883531921.000000
+ Wed Dec 31 17:32:01 1997 PST | 1000000 | 1000.000 | 1.000000 | 2450815 | 883618321.000000
+ Fri Dec 31 17:32:01 1999 PST | 1000000 | 1000.000 | 1.000000 | 2451545 | 946690321.000000
+ Sat Jan 01 17:32:01 2000 PST | 1000000 | 1000.000 | 1.000000 | 2451546 | 946776721.000000
+ Sun Dec 31 17:32:01 2000 PST | 1000000 | 1000.000 | 1.000000 | 2451911 | 978312721.000000
+ Mon Jan 01 17:32:01 2001 PST | 1000000 | 1000.000 | 1.000000 | 2451912 | 978399121.000000
+(66 rows)
+
-- value near upper bound uses special case in code
SELECT date_part('epoch', '294270-01-01 00:00:00+00'::timestamptz);
date_part
@@ -1196,6 +1275,19 @@ SELECT date_part('epoch', '294270-01-01 00:00:00+00'::timestamptz);
9224097091200
(1 row)
+SELECT extract(epoch from '294270-01-01 00:00:00+00'::timestamptz);
+ extract
+----------------------
+ 9224097091200.000000
+(1 row)
+
+-- another internal overflow test case
+SELECT extract(epoch from '5000-01-01 00:00:00+00'::timestamptz);
+ extract
+--------------------
+ 95617584000.000000
+(1 row)
+
-- TO_CHAR()
SELECT to_char(d1, 'DAY Day day DY Dy dy MONTH Month month RM MON Mon mon')
FROM TIMESTAMPTZ_TBL;
diff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out
index 0ae5b5204d..f4960c0166 100644
--- a/src/test/regress/expected/timetz.out
+++ b/src/test/regress/expected/timetz.out
@@ -148,33 +148,33 @@ HINT: No operator matches the given name and argument types. You might need to
-- test EXTRACT
--
SELECT EXTRACT(MICROSECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 25575401
+ extract
+----------
+ 25575401
(1 row)
SELECT EXTRACT(MILLISECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
-----------
25575.401
(1 row)
SELECT EXTRACT(SECOND FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
+ extract
-----------
25.575401
(1 row)
SELECT EXTRACT(MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 30
+ extract
+---------
+ 30
(1 row)
SELECT EXTRACT(HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
- date_part
------------
- 13
+ extract
+---------
+ 13
(1 row)
SELECT EXTRACT(DAY FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
@@ -182,24 +182,50 @@ ERROR: "time with time zone" units "day" not recognized
SELECT EXTRACT(FORTNIGHT FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04'); -- error
ERROR: "time with time zone" units "fortnight" not recognized
SELECT EXTRACT(TIMEZONE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+ extract
+---------
+ -16200
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+ extract
+---------
+ -4
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+ extract
+---------
+ -30
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
+ extract
+--------------
+ 63025.575401
+(1 row)
+
+-- date_part implementation is mostly the same as extract, so only
+-- test a few cases for additional coverage.
+SELECT date_part('microsecond', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
date_part
-----------
- -16200
+ 25575401
(1 row)
-SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+SELECT date_part('millisecond', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
date_part
-----------
- -4
+ 25575.401
(1 row)
-SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
+SELECT date_part('second', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
date_part
-----------
- -30
+ 25.575401
(1 row)
-SELECT EXTRACT(EPOCH FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
+SELECT date_part('epoch', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
date_part
--------------
63025.575401
diff --git a/src/test/regress/sql/date.sql b/src/test/regress/sql/date.sql
index 1cdd6bdca1..8f7435b767 100644
--- a/src/test/regress/sql/date.sql
+++ b/src/test/regress/sql/date.sql
@@ -284,14 +284,18 @@ CREATE TABLE DATE_TBL (f1 date);
SELECT EXTRACT(DAY FROM DATE '2020-08-11');
SELECT EXTRACT(MONTH FROM DATE '2020-08-11');
SELECT EXTRACT(YEAR FROM DATE '2020-08-11');
+SELECT EXTRACT(YEAR FROM DATE '2020-08-11 BC');
SELECT EXTRACT(DECADE FROM DATE '2020-08-11');
SELECT EXTRACT(CENTURY FROM DATE '2020-08-11');
SELECT EXTRACT(MILLENNIUM FROM DATE '2020-08-11');
SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11');
+SELECT EXTRACT(ISOYEAR FROM DATE '2020-08-11 BC');
SELECT EXTRACT(QUARTER FROM DATE '2020-08-11');
SELECT EXTRACT(WEEK FROM DATE '2020-08-11');
SELECT EXTRACT(DOW FROM DATE '2020-08-11');
+SELECT EXTRACT(DOW FROM DATE '2020-08-16');
SELECT EXTRACT(ISODOW FROM DATE '2020-08-11');
+SELECT EXTRACT(ISODOW FROM DATE '2020-08-16');
SELECT EXTRACT(DOY FROM DATE '2020-08-11');
SELECT EXTRACT(TIMEZONE FROM DATE '2020-08-11');
SELECT EXTRACT(TIMEZONE_M FROM DATE '2020-08-11');
@@ -321,14 +325,9 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- oscillating fields from non-finite date:
--
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(HOUR FROM DATE '-infinity'); -- NULL
--- all possible fields
-SELECT EXTRACT(MICROSECONDS FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(MILLISECONDS FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(SECOND FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(MINUTE FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(HOUR FROM DATE 'infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
+SELECT EXTRACT(DAY FROM DATE '-infinity'); -- NULL
+-- all supported fields
SELECT EXTRACT(DAY FROM DATE 'infinity'); -- NULL
SELECT EXTRACT(MONTH FROM DATE 'infinity'); -- NULL
SELECT EXTRACT(QUARTER FROM DATE 'infinity'); -- NULL
@@ -336,15 +335,12 @@ CREATE TABLE DATE_TBL (f1 date);
SELECT EXTRACT(DOW FROM DATE 'infinity'); -- NULL
SELECT EXTRACT(ISODOW FROM DATE 'infinity'); -- NULL
SELECT EXTRACT(DOY FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(TIMEZONE FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(TIMEZONE_M FROM DATE 'infinity'); -- NULL
-SELECT EXTRACT(TIMEZONE_H FROM DATE 'infinity'); -- NULL
--
-- monotonic fields from non-finite date:
--
SELECT EXTRACT(EPOCH FROM DATE 'infinity'); -- Infinity
SELECT EXTRACT(EPOCH FROM DATE '-infinity'); -- -Infinity
--- all possible fields
+-- all supported fields
SELECT EXTRACT(YEAR FROM DATE 'infinity'); -- Infinity
SELECT EXTRACT(DECADE FROM DATE 'infinity'); -- Infinity
SELECT EXTRACT(CENTURY FROM DATE 'infinity'); -- Infinity
@@ -355,7 +351,7 @@ CREATE TABLE DATE_TBL (f1 date);
--
-- wrong fields from non-finite date:
--
-SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- ERROR: timestamp units "microsec" not recognized
+SELECT EXTRACT(MICROSEC FROM DATE 'infinity'); -- error
-- test constructors
select make_date(2013, 7, 15);
diff --git a/src/test/regress/sql/interval.sql b/src/test/regress/sql/interval.sql
index 11c1929bef..6d532398bd 100644
--- a/src/test/regress/sql/interval.sql
+++ b/src/test/regress/sql/interval.sql
@@ -343,3 +343,15 @@ CREATE TABLE INTERVAL_MULDIV_TBL (span interval);
SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');
SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');
SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y');
+
+-- date_part implementation is mostly the same as extract, so only
+-- test a few cases for additional coverage.
+SELECT f1,
+ date_part('microsecond', f1) AS microsecond,
+ date_part('millisecond', f1) AS millisecond,
+ date_part('second', f1) AS second,
+ date_part('epoch', f1) AS epoch
+ FROM INTERVAL_TBL;
+
+-- internal overflow test case
+SELECT extract(epoch from interval '1000000000 days');
diff --git a/src/test/regress/sql/time.sql b/src/test/regress/sql/time.sql
index 688bd6b75b..3637f28798 100644
--- a/src/test/regress/sql/time.sql
+++ b/src/test/regress/sql/time.sql
@@ -63,3 +63,10 @@ CREATE TABLE TIME_TBL (f1 time(2));
SELECT EXTRACT(FORTNIGHT FROM TIME '2020-05-26 13:30:25.575401'); -- error
SELECT EXTRACT(TIMEZONE FROM TIME '2020-05-26 13:30:25.575401'); -- error
SELECT EXTRACT(EPOCH FROM TIME '2020-05-26 13:30:25.575401');
+
+-- date_part implementation is mostly the same as extract, so only
+-- test a few cases for additional coverage.
+SELECT date_part('microsecond', TIME '2020-05-26 13:30:25.575401');
+SELECT date_part('millisecond', TIME '2020-05-26 13:30:25.575401');
+SELECT date_part('second', TIME '2020-05-26 13:30:25.575401');
+SELECT date_part('epoch', TIME '2020-05-26 13:30:25.575401');
diff --git a/src/test/regress/sql/timestamp.sql b/src/test/regress/sql/timestamp.sql
index ed03d7c218..c43a1f2268 100644
--- a/src/test/regress/sql/timestamp.sql
+++ b/src/test/regress/sql/timestamp.sql
@@ -261,8 +261,21 @@ CREATE TABLE TIMESTAMP_TBL (d1 timestamp(2) without time zone);
date_part( 'epoch', d1) AS epoch
FROM TIMESTAMP_TBL;
+-- extract implementation is mostly the same as date_part, so only
+-- test a few cases for additional coverage.
+SELECT d1 as "timestamp",
+ extract(microseconds from d1) AS microseconds,
+ extract(milliseconds from d1) AS milliseconds,
+ extract(seconds from d1) AS seconds,
+ round(extract(julian from d1)) AS julian,
+ extract(epoch from d1) AS epoch
+ FROM TIMESTAMP_TBL;
+
-- value near upper bound uses special case in code
SELECT date_part('epoch', '294270-01-01 00:00:00'::timestamp);
+SELECT extract(epoch from '294270-01-01 00:00:00'::timestamp);
+-- another internal overflow test case
+SELECT extract(epoch from '5000-01-01 00:00:00'::timestamp);
-- TO_CHAR()
SELECT to_char(d1, 'DAY Day day DY Dy dy MONTH Month month RM MON Mon mon')
diff --git a/src/test/regress/sql/timestamptz.sql b/src/test/regress/sql/timestamptz.sql
index fd1bf50a9c..17ced99efc 100644
--- a/src/test/regress/sql/timestamptz.sql
+++ b/src/test/regress/sql/timestamptz.sql
@@ -275,8 +275,21 @@ CREATE TABLE TIMESTAMPTZ_TBL (d1 timestamp(2) with time zone);
date_part( 'timezone_minute', d1) AS timezone_minute
FROM TIMESTAMPTZ_TBL;
+-- extract implementation is mostly the same as date_part, so only
+-- test a few cases for additional coverage.
+SELECT d1 as "timestamp",
+ extract(microseconds from d1) AS microseconds,
+ extract(milliseconds from d1) AS milliseconds,
+ extract(seconds from d1) AS seconds,
+ round(extract(julian from d1)) AS julian,
+ extract(epoch from d1) AS epoch
+ FROM TIMESTAMPTZ_TBL;
+
-- value near upper bound uses special case in code
SELECT date_part('epoch', '294270-01-01 00:00:00+00'::timestamptz);
+SELECT extract(epoch from '294270-01-01 00:00:00+00'::timestamptz);
+-- another internal overflow test case
+SELECT extract(epoch from '5000-01-01 00:00:00+00'::timestamptz);
-- TO_CHAR()
SELECT to_char(d1, 'DAY Day day DY Dy dy MONTH Month month RM MON Mon mon')
diff --git a/src/test/regress/sql/timetz.sql b/src/test/regress/sql/timetz.sql
index f39ea121bf..7b70f4656c 100644
--- a/src/test/regress/sql/timetz.sql
+++ b/src/test/regress/sql/timetz.sql
@@ -70,3 +70,10 @@ CREATE TABLE TIMETZ_TBL (f1 time(2) with time zone);
SELECT EXTRACT(TIMEZONE_HOUR FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
SELECT EXTRACT(TIMEZONE_MINUTE FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04:30');
SELECT EXTRACT(EPOCH FROM TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
+
+-- date_part implementation is mostly the same as extract, so only
+-- test a few cases for additional coverage.
+SELECT date_part('microsecond', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
+SELECT date_part('millisecond', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
+SELECT date_part('second', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
+SELECT date_part('epoch', TIME WITH TIME ZONE '2020-05-26 13:30:25.575401-04');
base-commit: 91e7c903291116bd081abe7d4a058d40a2a06e16
--
2.31.0
On 01.04.21 20:49, Peter Eisentraut wrote:
also done
I also figured out a way to combine the float8 and numeric
implementations so that there is not so much duplication. Added tests
to cover all the edge and overflow cases. I think this is solid now.
The extract(julian from timestamp) is still a bit in the slow mode, but
as I previously stated, it's not documented and gives the wrong result,
so it's not clear whether it should be fixed and what it should do. I
think I'll register that part as an open item in any case, to see what
we should do about that.
committed and done
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
The extract(julian from timestamp) is still a bit in the slow mode, but
as I previously stated, it's not documented and gives the wrong result,
so it's not clear whether it should be fixed and what it should do. I
think I'll register that part as an open item in any case, to see what
we should do about that.
I looked into this issue. It's not quite true that the behavior is
entirely undocumented: Appendix B (datetime.sgml) says
In the Julian Date system, each day has a sequential number, starting
from JD 0 (which is sometimes called <emphasis>the</emphasis> Julian Date).
JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or
24 November 4714 BC in the Gregorian calendar. Julian Date counting
is most often used by astronomers for labeling their nightly observations,
and therefore a date runs from noon UTC to the next noon UTC, rather than
from midnight to midnight: JD 0 designates the 24 hours from noon UTC on
24 November 4714 BC to noon UTC on 25 November 4714 BC.
</para>
<para>
Although <productname>PostgreSQL</productname> supports Julian Date notation for
input and output of dates (and also uses Julian dates for some internal
datetime calculations), it does not observe the nicety of having dates
run from noon to noon. <productname>PostgreSQL</productname> treats a Julian Date
as running from midnight to midnight.
</para>
That last bit requires clarification: we treat a Julian date as running
from *local* midnight to local midnight (ie in the active timezone, not
UTC midnight). So far as I can see, the behavior of extract(julian) is
consistent with that definition:
regression=# show timezone;
TimeZone
------------------
America/New_York
(1 row)
regression=# select date_part('julian', '2021-04-19 00:00:01-04'::timestamptz);
date_part
-------------------
2459324.000011574
(1 row)
regression=# select date_part('julian', '2021-04-19 23:59:00-04'::timestamptz);
date_part
--------------------
2459324.9993055556
(1 row)
regression=# select date_part('julian', '2021-04-19'::date);
date_part
-----------
2459324
(1 row)
I don't see that to_char's J mode differs from this, either.
So I don't think there's any code change required (unless you are still
worried about speed). What we do need is documentation fixes:
* clarify the above bit about local vs UTC midnight
* document the existence of the julian field for date_part/extract
* fix this bit in the to_char docs to agree with reality,
ie s/UTC/local time/:
<row>
<entry><literal>J</literal></entry>
<entry>Julian Day (integer days since November 24, 4714 BC at midnight UTC)</entry>
</row>
Perhaps it'd be worth documenting that you can get the standard
astronomical definition of Julian date by transposing to time zone UTC-12
before converting. But I think trying to change PG's behavior at this
point would be a bad idea.
(We could also consider back-patching these doc fixes.)
regards, tom lane
I wrote:
So I don't think there's any code change required (unless you are still
worried about speed). What we do need is documentation fixes:
* clarify the above bit about local vs UTC midnight
* document the existence of the julian field for date_part/extract
* fix this bit in the to_char docs to agree with reality,
ie s/UTC/local time/:
<entry>Julian Day (integer days since November 24, 4714 BC at midnight UTC)</entry>
Perhaps it'd be worth documenting that you can get the standard
astronomical definition of Julian date by transposing to time zone UTC-12
before converting. But I think trying to change PG's behavior at this
point would be a bad idea.
Here's a concrete documentation proposal covering this.
regards, tom lane
Attachments:
document-julian-dates-better-1.patch (text/x-diff, charset=us-ascii)
diff --git a/doc/src/sgml/datetime.sgml b/doc/src/sgml/datetime.sgml
index 39fbc39cb0..c069281d1a 100644
--- a/doc/src/sgml/datetime.sgml
+++ b/doc/src/sgml/datetime.sgml
@@ -763,9 +763,6 @@
<indexterm zone="datetime-units-history">
<primary>Gregorian calendar</primary>
</indexterm>
- <indexterm zone="datetime-units-history">
- <primary>Julian date</primary>
- </indexterm>
<para>
The SQL standard states that <quote>Within the definition of a
@@ -868,6 +865,15 @@ $ <userinput>cal 9 1752</userinput>
festivals.
</para>
+ </sect1>
+
+ <sect1 id="datetime-julian-dates">
+ <title>Julian Dates</title>
+
+ <indexterm zone="datetime-julian-dates">
+ <primary>Julian date</primary>
+ </indexterm>
+
<para>
The <firstterm>Julian Date</firstterm> system is another type of
calendar, unrelated to the Julian calendar though it is confusingly
@@ -876,6 +882,9 @@ $ <userinput>cal 9 1752</userinput>
Joseph Justus Scaliger (1540–1609)
and probably takes its name from Scaliger's father,
the Italian scholar Julius Caesar Scaliger (1484–1558).
+ </para>
+
+ <para>
In the Julian Date system, each day has a sequential number, starting
from JD 0 (which is sometimes called <emphasis>the</emphasis> Julian Date).
JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or
@@ -891,7 +900,26 @@ $ <userinput>cal 9 1752</userinput>
input and output of dates (and also uses Julian dates for some internal
datetime calculations), it does not observe the nicety of having dates
run from noon to noon. <productname>PostgreSQL</productname> treats a Julian Date
- as running from midnight to midnight.
+ as running from local midnight to local midnight, the same as a normal
+ date.
+ </para>
+
+ <para>
+ This definition does, however, provide a way to obtain the astronomical
+ definition when you need it: do the arithmetic in time
+ zone <literal>UTC-12</literal>. For example,
+<programlisting>
+=> SELECT extract(julian from '2021-06-23 7:00:00-04'::timestamptz at time zone 'UTC-12');
+ extract
+------------------------------
+ 2459389.95833333333333333333
+(1 row)
+=> SELECT extract(julian from '2021-06-23 8:00:00-04'::timestamptz at time zone 'UTC-12');
+ extract
+--------------------------------------
+ 2459390.0000000000000000000000000000
+(1 row)
+</programlisting>
</para>
</sect1>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 5bba13973f..105555cfb4 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -7539,7 +7539,8 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}');
</row>
<row>
<entry><literal>J</literal></entry>
- <entry>Julian Day (integer days since November 24, 4714 BC at midnight UTC)</entry>
+ <entry>Julian Date (integer days since November 24, 4714 BC at local
+ midnight; see <xref linkend="datetime-julian-dates"/>)</entry>
</row>
<row>
<entry><literal>Q</literal></entry>
@@ -9609,6 +9610,25 @@ SELECT EXTRACT(ISOYEAR FROM DATE '2006-01-02');
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>julian</literal></term>
+ <listitem>
+ <para>
+ The <firstterm>Julian Date</firstterm> corresponding to the
+ date or timestamp (not applicable to intervals). Timestamps
+ that are not local midnight result in a fractional value. See
+ <xref linkend="datetime-julian-dates"/> for more information.
+ </para>
+
+<screen>
+SELECT EXTRACT(JULIAN FROM DATE '2006-01-01');
+<lineannotation>Result: </lineannotation><computeroutput>2453737</computeroutput>
+SELECT EXTRACT(JULIAN FROM TIMESTAMP '2006-01-01 12:00');
+<lineannotation>Result: </lineannotation><computeroutput>2453737.50000000000000000000</computeroutput>
+</screen>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>microseconds</literal></term>
<listitem>
I wrote:
Perhaps it'd be worth documenting that you can get the standard
astronomical definition of Julian date by transposing to time zone UTC-12
before converting.
BTW ... I'd first thought that the way to do this was to rotate to
time zone UTC+12. I convinced myself on two separate days that UTC-12
was correct instead, but now I'm thinking I was right the first time.
In particular, the results I'm getting with UTC-12 don't square with
the example on Wikipedia (https://en.wikipedia.org/wiki/Julian_day), which says "the Julian Date for
00:30:00.0 UT January 1, 2013, is 2 456 293.520 833":
regression=# select extract(julian from '2013-01-01 00:30+00'::timestamptz at time zone 'utc-12');
extract
------------------------------
2456294.52083333333333333333
(1 row)
But using UTC+12 does match:
regression=# select extract(julian from '2013-01-01 00:30+00'::timestamptz at time zone 'utc+12');
extract
------------------------------
2456293.52083333333333333333
(1 row)
Of course Wikipedia has been known to contain errors, but now
I'm inclined to think I blew this. Anyone want to check my work?
regards, tom lane
On Wed, Apr 28, 2021 at 3:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Of course Wikipedia has been known to contain errors, but now
I'm inclined to think I blew this. Anyone want to check my work?
I tried a couple of examples not from Wikipedia. First, from the
definition of Julian days as used by astronomers [1], counting from
noon on 4713-01-01 BC Julian AKA 4714-11-24 BC Gregorian, days 0 and 1
look right with 'utc+12':
postgres=# select extract(julian from '4714-11-24 11:00:00+00 BC'::timestamptz at time zone 'utc+12');
ERROR: timestamp out of range
postgres=# select extract(julian from '4714-11-24 12:00:00+00 BC'::timestamptz at time zone 'utc+12');
extract
--------------------------------
0.0000000000000000000000000000
(1 row)
postgres=# select extract(julian from '4714-11-25 11:00:00+00 BC'::timestamptz at time zone 'utc+12');
extract
------------------------
0.95833333333333333333
(1 row)
postgres=# select extract(julian from '4714-11-25 12:00:00+00 BC'::timestamptz at time zone 'utc+12');
extract
--------------------------------
1.0000000000000000000000000000
(1 row)
Next I found a worked example in an aerospace textbook [2] and it agrees, too:
postgres=# select extract(julian from '2004-05-12 14:45:30+00'::timestamptz at time zone 'utc+12');
extract
------------------------------
2453138.11493055555555555556
(1 row)
[1]: http://curious.astro.cornell.edu/people-and-astronomy/125-observational-astronomy/timekeeping/calendars/763-how-was-the-starting-point-for-the-julian-date-system-chosen-advanced
[2]: https://www.sciencedirect.com/topics/engineering/julian-day-number
Thomas Munro <thomas.munro@gmail.com> writes:
On Wed, Apr 28, 2021 at 3:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Of course Wikipedia has been known to contain errors, but now
I'm inclined to think I blew this. Anyone want to check my work?
I tried a couple of examples not from Wikipedia. ...
Thanks for checking! I'll go adjust the documentation.
regards, tom lane
On Tue, Nov 03, 2020 at 07:22:14PM -0500, Tom Lane wrote:
I feel like this is committable at this point --- any objections?
(This became commit 40c24bf, "Improve our ability to regurgitate SQL-syntax
function calls.")
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2369,11 +2369,12 @@ _equalFuncCall(const FuncCall *a, const FuncCall *b)
+ COMPARE_SCALAR_FIELD(funcformat);
equalfuncs.c has been using COMPARE_COERCIONFORM_FIELD() to ignore differences
in fields of this type. Does this spot have cause to depart from the pattern?
Noah Misch <noah@leadboat.com> writes:
equalfuncs.c has been using COMPARE_COERCIONFORM_FIELD() to ignore differences
in fields of this type. Does this spot have cause to depart from the pattern?
Oversight, I think. Will fix.
regards, tom lane
I wrote:
Noah Misch <noah@leadboat.com> writes:
equalfuncs.c has been using COMPARE_COERCIONFORM_FIELD() to ignore differences
in fields of this type. Does this spot have cause to depart from the pattern?
Oversight, I think. Will fix.
After looking closer, I see that there are a couple of very very minor
ways in which parse analysis changes behavior based on the value of
FuncCall.funcformat:
* transformRangeFunction won't apply the appropriate transformation to
a multiple-argument unnest() unless the format is COERCE_EXPLICIT_CALL.
(This is likely a no-op, though, as no grammar production that emits
COERCE_SQL_SYNTAX could apply to the function name "unnest".)
* ParseFuncOrColumn will not believe that a FuncCall could_be_projection
unless the format is COERCE_EXPLICIT_CALL. This is next door to a no-op,
since other restrictions such as nargs == 1 would usually suffice to
reject COERCE_SQL_SYNTAX calls, but maybe there are corner cases where
it'd matter.
So if you wanted to be picky you could claim that within FuncCall,
funcformat is semantically significant and thus that equalfuncs.c is
coded correctly. Nonetheless I'm inclined to think that it'd be better
to use COMPARE_COERCIONFORM_FIELD here. I'm quite sure I didn't make
the above analysis when I wrote the code; using COMPARE_SCALAR_FIELD
was just reflex.
We could make use of COMPARE_COERCIONFORM_FIELD 100% correct by removing
these two tests of the funcformat value, but on the whole I doubt that
would be better.
BTW, I'm not sure any of this matters anyway; do we ever use equal()
on raw parse trees, except for debug purposes?
Thoughts?
regards, tom lane
I wrote:
We could make use of COMPARE_COERCIONFORM_FIELD 100% correct by removing
these two tests of the funcformat value, but on the whole I doubt that
would be better.
On still closer inspection, that seems like it'd be fine. All of
the gram.y productions that emit COERCE_SQL_SYNTAX also produce
schema-qualified function names (via SystemFuncName); and it seems
hard to see a use-case where we'd not do that. This makes the two
checks I cited 100% redundant, because the conditions they are in
also insist on an unqualified function name. So let's just take them
out again, making it strictly OK to use COMPARE_COERCIONFORM_FIELD.
regards, tom lane
On Sun, Jun 06, 2021 at 03:10:07PM -0400, Tom Lane wrote:
I wrote:
We could make use of COMPARE_COERCIONFORM_FIELD 100% correct by removing
these two tests of the funcformat value, but on the whole I doubt that
would be better.
On still closer inspection, that seems like it'd be fine. All of
the gram.y productions that emit COERCE_SQL_SYNTAX also produce
schema-qualified function names (via SystemFuncName); and it seems
hard to see a use-case where we'd not do that. This makes the two
checks I cited 100% redundant, because the conditions they are in
also insist on an unqualified function name. So let's just take them
out again, making it strictly OK to use COMPARE_COERCIONFORM_FIELD.
I have little intuition on this exact topic, but I have no particular concerns
about the change you pushed.