Since '2001-09-09 01:46:40'::timestamp microseconds are lost when extracting epoch
Hello,
Steps to reproduce:
select extract(epoch from '2001-09-09 01:46:39.999999'::timestamp)
returns 999999999.999999 as expected
while
select extract(epoch from '2001-09-09 01:46:40.000021'::timestamp)
returns 1000000000.00002 - 1 microsecond is truncated.
Obviously, it is due to the fact that extract epoch returns double
precision which in turn has 15 decimal digits precision.
While there is a pretty simple workaround in C, that returns
microseconds since Unix epoch:
PG_FUNCTION_INFO_V1(to_microseconds);

Datum
to_microseconds(PG_FUNCTION_ARGS)
{
    /* shift from the PostgreSQL epoch (2000-01-01) to the Unix epoch */
    Timestamp arg = PG_GETARG_TIMESTAMP(0) + 946684800000000;

    PG_RETURN_INT64(arg);
}
I was not able to find any other way of doing this (i.e., without a C
function).
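For reference, the magic constant in the C function is the offset between the Unix epoch (1970-01-01) and PostgreSQL's timestamp epoch (2000-01-01), expressed in microseconds; a quick check (Python here, purely for illustration):

```python
from datetime import date

# Day count between the Unix epoch (1970-01-01) and the PostgreSQL
# timestamp epoch (2000-01-01).
days = (date(2000, 1, 1) - date(1970, 1, 1)).days
offset_us = days * 86400 * 1_000_000  # days -> seconds -> microseconds

print(days)       # 10957
print(offset_us)  # 946684800000000, the constant in the C function above
```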
Petr Fedorov <petr.fedorov@phystech.edu> writes:
select extract(epoch from '2001-09-09 01:46:40.000021'::timestamp)
returns 1000000000.00002 - 1 microsecond is truncated.
Obviously, it is due to the fact that extract epoch returns double
precision which in turn has 15 decimal digits precision.
I can't get very excited about this. However, it might be worth
noting that v12 and HEAD print "1000000000.000021" as expected,
thanks to the Ryu float output code. You can get that from older
branches as well if you set extra_float_digits = 1.
By my arithmetic, IEEE float8 ought to be able to represent
microseconds accurately out to about 285 years either way from the
1970 epoch, so for practical purposes it'll be fine for a long time.
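As a back-of-envelope check of that figure: an IEEE double has a 53-bit significand, so integer microsecond counts are exact up to 2**53, and converted to years that gives (illustrative sketch only):

```python
# 2**53 microseconds, expressed in years of 365.25 days each.
exact_us = 2 ** 53
years = exact_us / 1_000_000 / (365.25 * 86400)
print(years)  # about 285.4 years either side of the epoch
```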
regards, tom lane
On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:
Obviously, it is due to the fact that extract epoch returns double
precision which in turn has 15 decimal digits precision.
I guess this deviation from the SQL standard ("exact numeric") made
sense when PostgreSQL used double for timestamps, but would break a
lot of queries if we changed it.
Thomas Munro <thomas.munro@gmail.com> writes:
On Sat, Nov 30, 2019 at 10:28 PM Petr Fedorov <petr.fedorov@phystech.edu> wrote:
Obviously, it is due to the fact that extract epoch returns double
precision which in turn has 15 decimal digits precision.
I guess this deviation from the SQL standard ("exact numeric") made
sense when PostgreSQL used double for timestamps, but would break a
lot of queries if we changed it.
Hmmm ... well, now that you mention it, would it really break things
if we made it return numeric? There's an implicit cast to float8,
so it seems like queries requiring that type would still work.
There might be a performance-related argument against switching,
perhaps.
regards, tom lane
It appears that extract epoch returns double precision, not float8. And
the program below seems to be demonstrating that there are enough
'floating-point numbers' as defined by IEEE-754 to represent
1000000000.000021 precisely enough:
#include <cmath>
#include <iostream>
#include <iomanip>
#include <limits>

int main() {
    double from = 1000000000.000020;
    std::cout << std::setprecision(56) << from
              << " (" << std::hexfloat << from << ") " << std::endl;
    for (auto i = 0; i < 15; ++i) {
        double to = std::nextafter(from, std::numeric_limits<double>::max());
        std::cout << std::defaultfloat << to << std::hexfloat
                  << " (" << to << ") " << std::endl;
        from = to;
    }
}
Outputs:
1000000000.00002002716064453125 (0x1.dcd65000000a8p+29)
1000000000.00002014636993408203125 (0x1.dcd65000000a9p+29)
1000000000.0000202655792236328125 (0x1.dcd65000000aap+29)
1000000000.00002038478851318359375 (0x1.dcd65000000abp+29)
1000000000.000020503997802734375 (0x1.dcd65000000acp+29)
1000000000.00002062320709228515625 (0x1.dcd65000000adp+29)
1000000000.0000207424163818359375 (0x1.dcd65000000aep+29)
1000000000.00002086162567138671875 (0x1.dcd65000000afp+29)
1000000000.0000209808349609375 (0x1.dcd65000000bp+29)
1000000000.00002110004425048828125 (0x1.dcd65000000b1p+29)
1000000000.0000212192535400390625 (0x1.dcd65000000b2p+29)
1000000000.00002133846282958984375 (0x1.dcd65000000b3p+29)
1000000000.000021457672119140625 (0x1.dcd65000000b4p+29)
1000000000.00002157688140869140625 (0x1.dcd65000000b5p+29)
1000000000.0000216960906982421875 (0x1.dcd65000000b6p+29)
1000000000.00002181529998779296875 (0x1.dcd65000000b7p+29)
I'm not an expert in floating point math but hopefully it means that no
type change is required - double precision can handle it.
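The table bears that out: near 10**9 the gap between adjacent doubles (one ulp) is about 1.2e-7 seconds, comfortably below one microsecond, as a short check confirms (illustrative only):

```python
import math

# One ulp (gap to the next representable double) near 10**9:
x = 1000000000.000021
gap = math.nextafter(x, math.inf) - x
print(gap)         # about 1.19e-07 seconds
print(gap < 1e-6)  # True: adjacent microsecond values stay distinguishable here
```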
And since it works correctly on v12 for this particular date, maybe all
that is needed is to verify that it works for other dates too! For
example, what was changed in v12 (compared to the 11.6 I use) so that
extract epoch works correctly?
On 02.12.2019 01:59, Tom Lane wrote:
On Tue, Dec 3, 2019 at 12:08 AM Petr Fedorov <petr.fedorov@phystech.edu> wrote:
It appears that extract epoch returns double precision, not float8. And
the program below seems to be demonstrating that there are enough
'floating-point numbers' as defined by IEEE-754 to represent
1000000000.000021 precisely enough:
Double precision and float8 are different names for the same type in PostgreSQL.
I'm not an expert in floating point math but hopefully it means that no
type change is required - double precision can handle it.
Me neither, but the SQL standard requires us to use an exact numeric
type, so it's wrong on that level by definition.
It's also wrong because binary floating point numbers can't represent
0.000001 (one microsecond represented as seconds) exactly, and that's
our unit of counting for timestamps. You can get pretty far by
thinking of the decimal number you see on the screen as the true
number and the double as a fuzzy internal storage or transport that
does the job just fine due to the round trip conversion guarantee
provided by DBL_DIG, but the double is still going to have the wrong
value in some cases. As soon as you start doing any arithmetic or
comparisons with the double directly, interesting things can start to
happen to make the error visible and break things; for example
0.1::float8 + 0.2::float8 = 0.3::float8 is false.
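The same effect is easy to reproduce outside the database (a standalone sketch of that last point):

```python
# None of 0.1, 0.2 or 0.3 is exactly representable in binary floating
# point, so arithmetic on them exposes the stored error:
s = 0.1 + 0.2
print(s == 0.3)  # False
print(s)         # 0.30000000000000004
```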
And since it works correctly on v12 for this particular date, maybe all
that is needed is to verify that it works for other dates too! For
example, what was changed in v12 (compared to the 11.6 I use) so that
extract epoch works correctly?
PostgreSQL 12 adopted a different algorithm[1] for converting float8
to text that can affect how many digits are shown, as Tom explained.
The manual has some notes about it[2].
[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=02ddd499322ab6f2f0d58692955dc9633c2150fc
[2]: https://www.postgresql.org/docs/12/datatype-numeric.html#DATATYPE-FLOAT
On 2019-12-02 23:52, Thomas Munro wrote:
I'm not an expert in floating point math but hopefully it means that no
type change is required - double precision can handle it.
Me neither, but the SQL standard requires us to use an exact numeric
type, so it's wrong on that level by definition.
I looked into this (changing the return types of date_part()/extract()
from float8 to numeric).
One problem (other than perhaps performance, tbd.) is that this would no
longer allow processing infinite timestamps, since numeric does not
support infinity. It could be argued that running extract() on infinite
timestamps isn't very useful, but it's something to consider explicitly.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
One problem (other than perhaps performance, tbd.) is that this would no
longer allow processing infinite timestamps, since numeric does not
support infinity. It could be argued that running extract() on infinite
timestamps isn't very useful, but it's something to consider explicitly.
I wonder if it's time to fix that, ie introduce +-Infinity into numeric.c.
This isn't the first time we've seen issues with numeric not being a
superset of float, and it won't be the last.
At first glance there's no free bits in the on-disk format for numeric,
but we could do something by defining the low-order bits of the header
word for a NaN to distinguish between real NaN and +/- infinity.
It looks like those bits should reliably be zero right now.
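The bit-level idea can be sketched like this; the mask values below are hypothetical, purely to illustrate reusing the spare low-order header bits of the NaN pattern:

```python
# Hypothetical header words for numeric's "special" values.  Today only
# NaN exists and its low-order bits are reliably zero, so two nonzero
# patterns in those bits could tag +/- infinity without changing the
# on-disk format.
SPECIAL_MASK = 0xC000          # hypothetical "special value" high bits
NUM_NAN  = SPECIAL_MASK        # existing NaN: low-order bits all zero
NUM_PINF = SPECIAL_MASK | 0x1  # hypothetical +Infinity tag
NUM_NINF = SPECIAL_MASK | 0x2  # hypothetical -Infinity tag

def is_special(header):
    # Old-style check: all three still look "special" (NaN-like)
    return (header & SPECIAL_MASK) == SPECIAL_MASK

def classify(header):
    return {NUM_NAN: "nan", NUM_PINF: "+inf", NUM_NINF: "-inf"}[header]

for h in (NUM_NAN, NUM_PINF, NUM_NINF):
    print(hex(h), is_special(h), classify(h))
```

Code that only tests the high bits keeps treating all three patterns as special, which is what makes the scheme backward compatible.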
regards, tom lane
On 5/25/20 3:28 PM, Peter Eisentraut wrote:
On 2019-12-02 23:52, Thomas Munro wrote:
I'm not an expert in floating point math but hopefully it means that no
type change is required - double precision can handle it.
Me neither, but the SQL standard requires us to use an exact numeric
type, so it's wrong on that level by definition.
I looked into this (changing the return types of date_part()/extract()
from float8 to numeric).
I think what would be better is to have a specific date_part function
for each part and have extract translate to the appropriate one. This
is particularly interesting for epoch but it would also allow us to
return the correct type mandated by the spec.
(I would also accept a specific date_part per return type instead of per
part, that would probably even be better.)
--
Vik Fearing
Vik Fearing <vik@postgresfriends.org> writes:
On 5/25/20 3:28 PM, Peter Eisentraut wrote:
I looked into this (changing the return types of date_part()/extract()
from float8 to numeric).
I think what would be better is to have a specific date_part function
for each part and have extract translate to the appropriate one.
Doesn't really work for upwards compatibility with existing views,
which will have calls to date_part(text, ...) embedded in them.
Actually, now that I think about it, changing the result type of
date_part() is likely to be problematic anyway for such cases.
It's not going to be good if pg_upgrade's dump/restore of a view
results in a new output column type; especially if it's a
materialized view.
So maybe what we'd have to do is leave date_part() alone for
legacy compatibility, and invent new functions that the extract()
syntax would now be translated to. While at it, maybe we could
fix things so that the syntax reverse-lists the same way instead
of injecting Postgres-isms...
regards, tom lane
On 5/25/20 6:40 PM, Tom Lane wrote:
Vik Fearing <vik@postgresfriends.org> writes:
On 5/25/20 3:28 PM, Peter Eisentraut wrote:
I looked into this (changing the return types of date_part()/extract()
from float8 to numeric).I think what would be better is to have a specific date_part function
for each part and have extract translate to the appropriate one.Doesn't really work for upwards compatibility with existing views,
which will have calls to date_part(text, ...) embedded in them.Actually, now that I think about it, changing the result type of
date_part() is likely to be problematic anyway for such cases.
It's not going to be good if pg_upgrade's dump/restore of a view
results in a new output column type; especially if it's a
materialized view.So maybe what we'd have to do is leave date_part() alone for
legacy compatibility, and invent new functions that the extract()
syntax would now be translated to.
I'm sorry, I wasn't clear. I was suggesting adding new functions while
also keeping the current generic function. So exactly what you say in
that last paragraph.
Although <extract expression> has a fixed list of constant parts,
date_part() allows the part to be variable. So we need to keep it
anyway for cases like this contrived example:
SELECT date_part(p, now())
FROM UNNEST(ARRAY['epoch', 'year', 'second']) AS u (p)
While at it, maybe we could
fix things so that the syntax reverse-lists the same way instead
of injecting Postgres-isms...
I'm not sure what this means.
--
Vik Fearing
Vik Fearing <vik@postgresfriends.org> writes:
On 5/25/20 6:40 PM, Tom Lane wrote:
While at it, maybe we could
fix things so that the syntax reverse-lists the same way instead
of injecting Postgres-isms...
I'm not sure what this means.
This:
regression=# create view myview as select extract(year from current_timestamp) as y;
CREATE VIEW
regression=# \d+ myview
...
View definition:
SELECT date_part('year'::text, CURRENT_TIMESTAMP) AS y;
What had been a 100% spec-compliant view definition is now quite
Postgres-specific. I fixed some similar problems in 0bb51aa96 (before
that, the CURRENT_TIMESTAMP part would've reverse-listed differently
too); but I didn't tackle EXTRACT(), SUBSTRING(), and other cases.
I'm not claiming that we really need to fix all of those. But if we are
going to pick nits about which data type EXTRACT() returns then I think
it's legit to worry about its reverse-list representation at the same
time ... especially if we must touch the grammar's translation anyway.
regards, tom lane
I wrote:
What had been a 100% spec-compliant view definition is now quite
Postgres-specific. I fixed some similar problems in 0bb51aa96 (before
that, the CURRENT_TIMESTAMP part would've reverse-listed differently
too); but I didn't tackle EXTRACT(), SUBSTRING(), and other cases.
I'm not claiming that we really need to fix all of those. But if we are
going to pick nits about which data type EXTRACT() returns then I think
it's legit to worry about its reverse-list representation at the same
time ... especially if we must touch the grammar's translation anyway.
BTW, shortly after sending that I had an idea about how to do it without
adding a boatload of new parsetree infrastructure, which has been the
main reason why nobody has wanted to tackle it. The obvious way to do
this is to make a new kind of expression node, but that cascades into
lots and lots of places (see 0bb51aa96, plus the later commits that
fixed oversights in it :-(). It's a lot of work for a mostly-cosmetic
issue.
However: suppose that we continue to translate these things into FuncExpr
nodes, the same as always, but we add a new CoercionForm variant, say
COERCE_SQL_SYNTAX. 99% of the system ignores FuncExpr.funcformat,
and would continue to do so, but ruleutils.c would take it to mean
that (1) the call should be reverse-listed as some special SQL syntax
and (2) the funcid is one of a small set of built-in functions for
which ruleutils.c knows what to emit. (If it doesn't recognize the
funcid, it could either throw an error, or fall back to normal display
of the node.) For cases such as EXTRACT, this would also represent
a promise that specific arguments are Const nodes from which the
desired keyword can be extracted.
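In pseudo-code, the ruleutils side might look something like this, with a made-up funcid and names (Python standing in for the C, purely illustrative):

```python
# Hypothetical deparse table: funcid -> SQL-standard rendering.
SQL_SYNTAX = {
    4321: lambda args: "EXTRACT(%s FROM %s)" % (args[0].upper(), args[1]),
}

def deparse_func(funcid, funcformat, args):
    if funcformat == "COERCE_SQL_SYNTAX":
        render = SQL_SYNTAX.get(funcid)
        if render is not None:
            return render(args)  # reverse-list as special SQL syntax
        # unrecognized funcid: fall back to normal function display
    return "date_part(%s)" % ", ".join(repr(a) for a in args)

print(deparse_func(4321, "COERCE_SQL_SYNTAX", ["year", "ts"]))
# EXTRACT(YEAR FROM ts)
```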
This is kind of an abuse of "CoercionForm", since that typedef name
implies that it only talks about how to handle cast cases, but
semantically it's always been a how-to-display-function-calls thing.
We could either hold our noses about that or rename the typedef.
If we went this way then we could easily clean up most of the other
weird-SQL-syntax function call cases, incrementally over time,
without a lot of additional work.
regards, tom lane
On Mon, May 25, 2020 at 09:43:32AM -0400, Tom Lane wrote:
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
One problem (other than perhaps performance, tbd.) is that this would no
longer allow processing infinite timestamps, since numeric does not
support infinity. It could be argued that running extract() on infinite
timestamps isn't very useful, but it's something to consider explicitly.
I wonder if it's time to fix that, ie introduce +-Infinity into numeric.c.
This isn't the first time we've seen issues with numeric not being a
superset of float, and it won't be the last.
At first glance there's no free bits in the on-disk format for numeric,
but we could do something by defining the low-order bits of the header
word for a NaN to distinguish between real NaN and +/- infinity.
It looks like those bits should reliably be zero right now.
+1 for adding +/- infinity to NUMERIC.
Best,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
On 2020-05-25 15:28, Peter Eisentraut wrote:
On 2019-12-02 23:52, Thomas Munro wrote:
I'm not an expert in floating point math but hopefully it means that no
type change is required - double precision can handle it.
Me neither, but the SQL standard requires us to use an exact numeric
type, so it's wrong on that level by definition.
I looked into this (changing the return types of date_part()/extract()
from float8 to numeric).
One problem (other than perhaps performance, tbd.) is that this would no
longer allow processing infinite timestamps, since numeric does not
support infinity. It could be argued that running extract() on infinite
timestamps isn't very useful, but it's something to consider explicitly.
Now that numeric supports infinity, here is a patch that changes the
return types of date_part() to numeric. It's not meant to be a final
version, but it is useful for discussing a few things.
The internal implementation could be made a bit more elegant if we had
variants of int4_numeric() and int8_numeric() that don't have to go
through fmgr. This would also help in other areas of the code. There
are probably also other ways in which the internals could be made more
compact; I just converted them fairly directly.
When extracting seconds or microseconds, I made it always produce 6 or 3
decimal places, even if they are zero. I don't know if we want that or
what behavior we want. That's what all the changes in the regression
tests are about. Everything else passes unchanged.
The 'julian' field is a bit of a mystery. First of all it's not
documented. The regression tests only test the rounded output, perhaps
to avoid floating point differences. When you do date_part('julian',
date), then you get a correct Julian Day. But date_part('julian',
timestamp[tz]) gives incorrect Julian Date values that are off by 12
hours. My patch doesn't change that, I just noticed when I took away
the round() call in the regression tests. Those calls now produce a
different number of decimal places.
It might make sense to make date_part(..., date) a separate C function
instead of an SQL wrapper around date_part(..., timestamp). That could
return integer and could reject nonsensical fields such as "minute".
Then we could also make a less contorted implementation of
date_part('julian', date) that matches to_char(date, 'J') and remove the
incorrect implementation of date_part('julian', timestamp).
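For reference, astronomical Julian Dates roll over at noon rather than midnight, which would account for a 12-hour discrepancy; a quick check (the jdn helper here is illustrative, not PostgreSQL code):

```python
from datetime import date

# Julian Day Number of a proleptic Gregorian date (a noon-based count).
def jdn(d):
    return d.toordinal() + 1721425  # toordinal counts days from 0001-01-01

print(jdn(date(2000, 1, 1)))  # 2451545
# The Julian Date at 2000-01-01 00:00 is 2451544.5: half a day lower,
# because the Julian Date changes at noon, not at midnight.
print(jdn(date(2000, 1, 1)) - 0.5)
```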
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
v1-0001-Change-return-type-of-EXTRACT-to-numeric.patch (text/plain, +619 -421)
On Tue, Aug 4, 2020 at 16:08 Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
I like the idea of having a date variant of date_part.
Pavel
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I have also created a new date_part(..., date) in C, and added more test
coverage for that.
Other than some of the semantic issues mentioned in the previous
message, this version looks pretty good to me in principle.
I have done some performance tests to assess the impact of changing from
float to numeric. I did tests like this:
create table t1 (a int, b timestamp with time zone);
insert into t1 select generate_series(1, 10000000), current_timestamp +
random() * interval '1000 days';
select extract(dow from b) from t1 \g /dev/null
select extract(epoch from b) from t1 \g /dev/null
There appears to be about a 20% increase in run time for these tests.
These are obviously extreme tests, so I think that would be okay. More
tests and testing ideas are welcome.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
v2-0001-Add-more-tests-for-extract-of-date-type.patch (text/plain, +228 -16)
v2-0002-Expose-internal-function-for-converting-int64-to-.patch (text/plain, +43 -123)
v2-0003-Change-return-type-of-EXTRACT-to-numeric.patch (text/plain, +766 -482)
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I reviewed the 0002 patch, finding one bug (in int8_sum) and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
I continue to think that we can't commit 0003 in this form, because
of the breakage that will ensue in stored views. As I said upthread,
we should leave the existing SQL-exposed functions alone, invent
new ones that return numeric, and alter the parser to translate
EXTRACT constructs to the new functions. This approach would also
provide an "out" for anyone who does complain about the performance
cost --- they can just continue to use the old functions.
regards, tom lane
Attachments:
v3-0002-expose-int64_to_numeric.patch (text/x-diff, +50 -132)
On Mon, Sep 7, 2020 at 1:46 Tom Lane <tgl@sss.pgh.pa.us> wrote:
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I reviewed the 0002 patch, finding one bug (in int8_sum) and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
This patch is a clean win.
+1
I continue to think that we can't commit 0003 in this form, because
of the breakage that will ensue in stored views. As I said upthread,
we should leave the existing SQL-exposed functions alone, invent
new ones that return numeric, and alter the parser to translate
EXTRACT constructs to the new functions. This approach would also
provide an "out" for anyone who does complain about the performance
cost --- they can just continue to use the old functions.
+1
Regards
Pavel
On 2020-09-07 01:46, Tom Lane wrote:
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
Here is a new patch series version.
I have created a new internal function for converting integers to
numeric, to make the implementation a bit more elegant and compact.
I reviewed the 0002 patch, finding one bug (in int8_sum)
Ouch, no test coverage. Should we perhaps remove this function, since
it's obsolete and unused?
and a few
more calls of int8_numeric that could be converted. I think the
attached updated version is committable, and I'd recommend going
ahead with that regardless of the rest of this. I hadn't realized
how many random calls of int8_numeric and int4_numeric we'd grown,
but there are a lot, so this is nice cleanup.
Yes, please go ahead with it.
I continue to think that we can't commit 0003 in this form, because
of the breakage that will ensue in stored views. As I said upthread,
we should leave the existing SQL-exposed functions alone, invent
new ones that return numeric, and alter the parser to translate
EXTRACT constructs to the new functions. This approach would also
provide an "out" for anyone who does complain about the performance
cost --- they can just continue to use the old functions.
Okay, I will continue looking into this.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services