Problem migrating dump to latest CVS snapshot.
We have pgsql-7.0.2 running on a production platform doing nightly dumps. I
tried to import this dump with psql into a pgsql server built from a CVS
update of 7.1 that I did today.
All the data was imported OK except for the data in one table, where I got
the following message on import:
ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'
PQendcopy: resetting connection
The result was that this table ended up with no rows at all after the
import, when it should have contained more than 900,000 rows.
I get exactly the same error trying to import into a 7.0.2 database on
my laptop as well, so I guess the problem might have been around for a
while.
The production platform creating the dump file is Solaris 7 on an Ultra
Sparc, while the laptop I'm importing the file on is Redhat Linux 6.1 on an
x86 processor.
The actual table "access_log" has these columns:
a_accesstime   timestamp   8
a_locid        int4        4
a_catid        int4        4
a_searchterm   varchar     256
a_host         varchar     64
a_requesturl   varchar     128
a_action       varchar     16
a_uid          int4        4
a_pt_id        int4        4
Does anybody have suggestions as to where I should look for the error, or
what other data I need to supply to help somebody look into it?
If you look at the seconds part of the time above you notice 60, which makes
me wonder how that could get in there in the first place.
So to me there seem to be two bugs: it is possible to get invalid times into
the database, and dump/restore breaks if you manage this.
regards,
Gunnar
Gunnar Rønning <gunnar@candleweb.no> wrote:
> All the data was imported OK except for the data in one table, where I got
> the following message on import:
> ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'
> PQendcopy: resetting connection
Looks like the "ISO" datestyle to me.
> I get exactly the same error trying to import into a 7.0.2 database on my
> laptop as well, so I guess the problem might have been around for a while.
You'll need to set the default PGDATESTYLE to ISO prior to importing.
(I don't recall what the proper way to do this is, but it's definitely
documented).
HTH,
Ray
--
Where do you want to go today?
Confutatis maledictis, flammis acribus addictis.
Gunnar Rønning <gunnar@candleweb.no> writes:
> ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'
I'll venture it doesn't like the "60" for seconds.
> The production platform creating the dump file is Solaris 7 on an Ultra
> Sparc, while the laptop I'm importing the file on is Redhat Linux 6.1 on an
> x86 processor.
Seems Mandrake Linux is not the only platform where roundoff behavior is
less IEEE-perfect than Thomas would like it to be. Perhaps we need a
slightly more robust approach to controlling roundoff error.
regards, tom lane
Gunnar Rønning <gunnar@candleweb.no> writes:
> ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'
BTW, did your original data contain any fractional-second timestamps?
I'm wondering if the original value might have been something like
2000-10-24 15:14:59.999
in which case sprintf's roundoff of the seconds field to %.2f format
would've been enough to do the damage.
regards, tom lane
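A minimal, self-contained C sketch of the failure mode described above (the values and variable names are illustrative, not PostgreSQL source): a seconds field just below the next whole second, formatted with a "%.2f"-style conversion, rounds up to 60.00, which the timestamp input routine then rejects on restore.

#include <stdio.h>

int main(void)
{
    /* hypothetical stored value: 14 minutes, 59.997 seconds past 15:00 */
    int    hour = 15, min = 14;
    double sec  = 59.997;

    /* The seconds field is formatted on its own with two fractional
     * digits, so 59.997 rounds up and prints as an illegal 60.00. */
    printf("%02d:%02d:%05.2f\n", hour, min, sec);   /* prints 15:14:60.00 */
    return 0;
}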
> Seems Mandrake Linux is not the only platform where roundoff behavior is
> less IEEE-perfect than Thomas would like it to be. Perhaps we need a
> slightly more robust approach to controlling roundoff error.
Go ahead. ISTM that asking modulo, trunc, etc. to Do The Right Thing is
not a big deal, and it would be better to understand how to build
executables that can do math.
Certainly better than writing a bunch of extra checking code to work
around the inability of a compiler (or compiler options) to do IEEE
math. It *is* a standard, ya know ;)
- Thomas
Tom Lane <tgl@sss.pgh.pa.us> writes:
> Gunnar Rønning <gunnar@candleweb.no> writes:
>> ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'
> BTW, did your original data contain any fractional-second timestamps?
> I'm wondering if the original value might have been something like
> 2000-10-24 15:14:59.999
> in which case sprintf's roundoff of the seconds field to %.2f format
> would've been enough to do the damage.
What do you mean by original value? The value we have in the production
database? If so, that shows up as 2000-10-24 15:14:60.00+02 independent of
what platform my client is running on. The production platform was, as I
mentioned, Solaris 2.7.
The value was generated at the time of a given web request by a Java
servlet and inserted into the database using JDBC. A timestamp in Java is
the number of milliseconds since the epoch, so yes, it is quite probable
that it contained a fractional-second timestamp ;-)
But the problem here then might be with the Solaris 2.7 platform and not
Redhat Linux 6.1, if I am interpreting this right?
Regards,
Gunnar
Gunnar Rønning <gunnar@candleweb.no> writes:
>> BTW, did your original data contain any fractional-second timestamps?
>> I'm wondering if the original value might have been something like
>> 2000-10-24 15:14:59.999
>> in which case sprintf's roundoff of the seconds field to %.2f format
>> would've been enough to do the damage.
> What do you mean by original value? The value we have in the production
> database? If so, that shows up as 2000-10-24 15:14:60.00+02 independent of
> what platform my client is running on. The production platform was, as I
> mentioned, Solaris 2.7.
If you still have the value stored in the original database, please try
select date_part('seconds', ...)
to see what that reports as the true seconds part of the value.
regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> writes:
> If you still have the value stored in the original database, please try
> select date_part('seconds', ...)
> to see what that reports as the true seconds part of the value.
Seems you hit the nail on the head with your theory:
sf-ng=# select date_part('seconds', a_accesstime) from access_log where
a_accesstime > '2000-10-24 15:14:59' limit 1;
date_part
-----------
59.997
(1 row)
sf-ng=#
regards,
Gunnar
Gunnar Rønning <gunnar@candleweb.no> writes:
> Seems you hit the nail on the head with your theory:
> sf-ng=# select date_part('seconds', a_accesstime) from access_log where
> a_accesstime > '2000-10-24 15:14:59' limit 1;
>  date_part
> -----------
>     59.997
> (1 row)
Ah-hah. And we print that with a "%.2f" sort of format, so it rounds
off to 60.00. Even in IEEE arithmetic ;-)
I've suggested before that timestamp output should round the timestamp
value to two fractional digits *before* breaking it down into year/
month/etc. Seems like this is a perfect example of the necessity
for that. Thomas, what say you?
regards, tom lane
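A sketch of the ordering Tom is proposing, under the assumption of a timestamp stored as a double count of seconds (the epoch value below is Unix-style and purely illustrative; PostgreSQL's internal reference date differs, and this is not the actual backend code): round to the output precision first, then break the value down, so 59.997 seconds carries into the next minute instead of printing as 60.00.

#include <math.h>
#include <stdio.h>
#include <time.h>

static void print_rounded(double ts)
{
    /* round to two fractional digits *before* any breakdown */
    ts = rint(ts * 100.0) / 100.0;

    time_t whole = (time_t) floor(ts);      /* integral seconds */
    double frac  = ts - (double) whole;     /* now guaranteed < 1.0 */

    struct tm *tm = gmtime(&whole);         /* breakdown happens after rounding */
    printf("%04d-%02d-%02d %02d:%02d:%05.2f\n",
           tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
           tm->tm_hour, tm->tm_min, tm->tm_sec + frac);
}

int main(void)
{
    /* illustrative value equivalent to 2000-10-24 13:14:59.997 UTC */
    print_rounded(972393299.997);           /* prints 2000-10-24 13:15:00.00 */
    return 0;
}

Rounding the same value only after the seconds field has been split out is what produces the 15:14:60.00 seen in the dump.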
(moved to -hackers, since I don't have posting privileges on -general)
> I've suggested before that timestamp output should round the timestamp
> value to two fractional digits *before* breaking it down into year/
> month/etc. Seems like this is a perfect example of the necessity
> for that. Thomas, what say you?
Well, that is a good idea to solve the "hidden digits problem",
introducing instead a new "lost digits feature". But I've been hoping to
hear a suggestion on how to allow a variable number of digits, without
cluttering things up with output values ending up with a bunch of 9's at
the end.
When I first implemented the non-Unix-time date/time types, I was
worried that the floating point math libraries on *some* of the two
dozen platforms we support would tend to print out .9999... values
(having seen this behavior *way* too often on older Unix boxes). But
I've never actually asked folks to run tests, since I was just happy
that the floating point implementation worked well enough to go into
production.
Thoughts?
- Thomas
Thomas Lockhart <lockhart@alumni.caltech.edu> writes:
>> I've suggested before that timestamp output should round the timestamp
>> value to two fractional digits *before* breaking it down into year/
>> month/etc. Seems like this is a perfect example of the necessity
>> for that. Thomas, what say you?
> Well, that is a good idea to solve the "hidden digits problem",
> introducing instead a new "lost digits feature".
We already have the "lost digits feature", since you cannot get
timestamp_out to display anything after the second digit. Now,
if you want to improve on that ...
> But I've been hoping to
> hear a suggestion on how to allow a variable number of digits, without
> cluttering things up with output values ending up with a bunch of 9's at
> the end.
Well, we could print the seconds part with, say, %.6f format and then
manually delete trailing zeroes (and the trailing dot if we find all the
fractional digits are zero, which would be a nice improvement anyway).
I'd still think it a good idea to round to the intended number of digits
before we break down the date, however.
The real question here is how far away from the Epoch do you wish to
produce reasonable display of fractional seconds? We have 6-digit
accuracy out to around 200 years before and after Y2K, which is probably
far enough, though maybe we should make it 5 digits to allow some
more margin for error.
regards, tom lane
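A sketch of the trailing-zero trimming Tom outlines, using a hypothetical helper that formats the seconds field with a %.6f-style conversion and then strips zeroes (and the decimal point when the fraction is empty); it is not the actual timestamp_out code.

#include <stdio.h>
#include <string.h>

static void format_seconds(double sec, char *buf, size_t len)
{
    snprintf(buf, len, "%09.6f", sec);        /* e.g. "59.997000" */

    char *dot = strchr(buf, '.');
    if (dot != NULL)
    {
        char *end = buf + strlen(buf) - 1;
        while (end > dot && *end == '0')      /* strip trailing zeroes */
            *end-- = '\0';
        if (end == dot)                       /* fraction was all zeroes */
            *end = '\0';
    }
}

int main(void)
{
    char buf[32];

    format_seconds(59.997, buf, sizeof buf);  /* -> "59.997" */
    printf("%s\n", buf);

    format_seconds(7.0, buf, sizeof buf);     /* -> "07", no dot or zeroes */
    printf("%s\n", buf);
    return 0;
}

On the precision question: a double's spacing is 2^-20 seconds (just under a microsecond) for magnitudes up to 2^33 seconds, roughly 270 years, which seems consistent with the "6 digits out to around 200 years" figure and with keeping a digit of margin.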