TIMESTAMP('now') y2k bug
Your name : Dan Linderman
Your email address : linderdm@ironhilltech.com
System Configuration
---------------------
Architecture (example: Intel Pentium) : Intel Pentium III 500 MHz
Operating System (example: Linux 2.0.26 ELF) : Red Hat Linux 6.0
PostgreSQL version (example: PostgreSQL-6.5.3): PostgreSQL-6.5.3
Compiler used (example: gcc 2.8.0) : ?
Please enter a FULL description of your problem:
------------------------------------------------
When I insert a timestamp using TIMESTAMP('now') into a date field,
"1999-12-31 19:00:00-05" is what ends up in the field. This has
happened on multiple machines running PostgreSQL-6.5.3, and also when
inserting into a timestamp field and a datetime field.
Please describe a way to repeat the problem. Please try to provide a
concise reproducible example, if at all possible:
----------------------------------------------------------------------
Create a table with a date field. Insert TIMESTAMP('now') into that field.
Select that row to see the bug.
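A minimal reproduction along the lines described above might look like the following (a sketch only; it assumes a 6.5.x server whose local time zone is EST, i.e. UTC-5, run near 2000-01-01 00:00 UTC):

```sql
-- Hypothetical reproduction of the reported behavior.
CREATE TABLE t (d date);
INSERT INTO t VALUES (TIMESTAMP('now'));
SELECT d FROM t;
-- The reporter saw "1999-12-31 19:00:00-05" come back
-- instead of a plain date value.
```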
Dan Linderman
Web Programmer
Iron Hill Technology
linderdm@ironhilltech.com
302-369-9750 x23
www.ironhilltech.com
From bouncefilter Thu Jan 6 15:59:35 2000
Received: from datmail03.dat.com ([209.241.199.3])
by hub.org (8.9.3/8.9.3) with ESMTP id PAA02554;
Thu, 6 Jan 2000 15:59:01 -0500 (EST)
(envelope-from philip.culberson@dat.com)
Received: by datmail03.dat.com with Internet Mail Service (5.5.2448.0)
id <XF5BYQ32>; Thu, 6 Jan 2000 12:58:23 -0800
Message-ID: <A95EFC3B707BD311986C00A0C9E95B6A04B3C4@datmail03.dat.com>
From: "Culberson, Philip" <philip.culberson@dat.com>
To: "'Bruce Momjian'" <pgman@candle.pha.pa.us>, Dustin Sallings
<dustin@spy.net>
Cc: The Hermit Hacker <scrappy@hub.org>, pgsql-general@hub.org
Subject: RE: [GENERAL] Benchmarks
Date: Thu, 6 Jan 2000 12:58:21 -0800
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2448.0)
Content-Type: text/plain;
charset="iso-8859-1"
In his very insightful post last week, Mike Mascari pointed out that, on
tables with heavy insert/updates, it was much faster to drop the index,
vacuum analyze, and then rebuild the index. Maybe in vacuum there is a
specific inefficiency in what Mike coined "defragment"ing indexes.
[Snip]
8. Not running VACUUM - PostgreSQL won't use indexes, or won't optimize
correctly unless the record count and dispersion estimates are up-to-date.
People have reported problems with running vacuum while under heavy load. We
haven't seen it, but we run vacuum each night at 4:05 a.m. However, if you
perform a LARGE number of INSERTS/UPDATES, it is better for you to do the
following:
DROP INDEX index_on_heavily_used_table;
VACUUM ANALYZE;
CREATE INDEX index_on_heavily_used_table;
Because VACUUM will sit there, and, row by row, essentially "defragment"
your indexes, which can take damn near forever for any number of updates or
deletes greater than, say, 30,000 rows.
[Snip]
-----Original Message-----
From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]
Sent: Thursday, January 06, 2000 10:14 AM
To: Dustin Sallings
Cc: The Hermit Hacker; pgsql-general@hub.org
Subject: Re: [GENERAL] Benchmarks
> Untrue, vacuum is *extremely* important for updating statistics.
> If you have a lot of data in a table, and you have never vacuumed, you
> might as well not have any indices. It'd be nice if you could separate
> the stat update from the storage reclaim. Actually, it'd be nice if you
> could reuse storage, so that an actual vacuum wouldn't be necessary unless
> you just wanted to free up disk space you might end up using again anyway.
>
> The vacuum also doesn't seem to be very efficient. In one of my
> databases, a vacuum could take in excess of 24 hours, while I've written a
> small SQL script that does a select, rename, and an insert into select from
> that will do the same job in about ten minutes. This is a database that
> cannot lock for more than a few minutes.
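The select/rename swap described above might be sketched roughly as follows (this is a guess at the technique, not the poster's actual script, and the table name "accounts" is made up):

```sql
-- Copy only the live rows into a fresh table, then swap names.
SELECT * INTO accounts_new FROM accounts;    -- dead rows are left behind
ALTER TABLE accounts RENAME TO accounts_old; -- only a brief lock is needed
ALTER TABLE accounts_new RENAME TO accounts;
DROP TABLE accounts_old;                     -- reclaims all dead space at once
```

The appeal over VACUUM is that the copy is a single sequential pass, and the old storage is freed in one DROP rather than compacted row by row.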
This is serious. Why would an INSERT / RENAME be so much faster? Are
we that bad with VACUUM?
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
************
From bouncefilter Fri Jan 7 04:38:42 2000
Received: from zmail6.easynet.fr (email.easynet.fr [195.114.64.207])
by hub.org (8.9.3/8.9.3) with SMTP id EAA91726
for <pgsql-general@hub.org>; Fri, 7 Jan 2000 04:37:45 -0500 (EST)
(envelope-from tesio@easynet.fr)
Received: (qmail 14143 invoked from network); 7 Jan 2000 09:37:44 -0000
Received: from zmail4.easynet.fr (195.114.64.5)
by email.easynet.fr with QMTP; 7 Jan 2000 09:37:44 -0000
Received: (qmail 5404 invoked from network); 7 Jan 2000 02:31:03 -0000
Received: from mailgate2.easynet.fr (192.168.1.3)
by zmailserver.easynet.fr with QMQP; 7 Jan 2000 02:31:03 -0000
Received: from pop-nice-201.pops.easynet.fr (HELO atesio) (195.114.95.201)
by mrelay2.easynet.fr with SMTP; 7 Jan 2000 02:30:52 -0000
Message-ID: <006c01bf58b7$7e57b580$c95f72c3@atesio>
From: "Alain TESIO" <tesio@easynet.fr>
To: <pgsql-general@hub.org>
Cc: <Guillaume.Rousse@univ-reunion.fr>
References: <A95EFC3B707BD311986C00A0C9E95B6A04B3C4@datmail03.dat.com>
Subject: Re: MySQL / PostgreSQL (was: Postgres object orientation)
Date: Fri, 7 Jan 2000 03:32:42 +0100
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 5.00.2014.211
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2014.211
Subsidiary question: why is MySQL excluded from the RDBMS
comparison on the PostgreSQL www site?
Maybe because it's much faster ;)
Sure, MySQL doesn't support transactions, rollbacks, ...
Maybe this question will sound a bit naive, but why don't
we have the choice to send queries to PostgreSQL as
transactional or not? It's probably not meaningful to
say that a single query is transactional or not, but what
about a global parameter at the server level? Forgive me
again for the naivety of this question; it may mean having
two completely different engines. And it would probably
already have been done if it were possible ...
I've compared both engines and MySQL is much faster.
However, I'll need transactions to ensure reliability
for the database updates. I've thought of using PostgreSQL
for updates and MySQL for selects, with the database being
dumped from PostgreSQL and reloaded into MySQL every
night. Probably with specific queries and scripts rather
than a plain dump, to get a MySQL-compliant dump file.
Does anyone have experience with a similar solution?
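One hedged sketch of such a nightly per-table transfer (the table name and file path here are invented for illustration):

```sql
-- On the PostgreSQL side, export a table as tab-delimited text:
COPY accounts TO '/tmp/accounts.txt';

-- On the MySQL side, reload the same file:
-- LOAD DATA INFILE '/tmp/accounts.txt' INTO TABLE accounts;
```

Type differences (dates, booleans) would likely still need the per-column massaging the poster mentions, which is why a plain dump file alone may not be MySQL-compliant.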
Alain
From bouncefilter Thu Jan 6 21:42:38 2000
Received: from bob.west.spy.net (bob.west.spy.net [170.1.69.195])
by hub.org (8.9.3/8.9.3) with ESMTP id VAA88463
for <pgsql-general@hub.org>; Thu, 6 Jan 2000 21:42:27 -0500 (EST)
(envelope-from dustin@spy.net)
Received: from bleu.west.spy.net (bleu.west.spy.net [192.168.1.4])
by bob.west.spy.net (Postfix) with SMTP
id 293584848; Thu, 6 Jan 2000 18:43:12 -0800 (PST)
Date: Thu, 6 Jan 2000 18:42:12 -0800 (PST)
From: Dustin Sallings <dustin@spy.net>
X-Sender: dustin@bleu.west.spy.net
To: "Culberson, Philip" <philip.culberson@dat.com>
Cc: pgsql-general@hub.org
Subject: RE: [GENERAL] Benchmarks
In-Reply-To: <A95EFC3B707BD311986C00A0C9E95B6A04B3C4@datmail03.dat.com>
Message-ID: <Pine.SGI.3.95.1000106184141.27840A-100000@bleu.west.spy.net>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
On Thu, 6 Jan 2000, Culberson, Philip wrote:
This is a considerable amount faster. I never thought about the
indices getting hit here. Thanks a lot.
# In his very insightful post last week, Mike Mascari pointed out that, on
# tables with heavy insert/updates, it was much faster to drop the index,
# vacuum analyze, and then rebuild the index. Maybe in vacuum there is a
# specific inefficiency in what Mike coined "defragment"ing indexes.
#
# [Snip]
#
# 8. Not running VACUUM - PostgreSQL won't use indexes, or won't optimize
# correctly unless the record count and dispersion estimates are up-to-date.
# People have reported problems with running vacuum while under heavy load. We
# haven't seen it, but we run vacuum each night at 4:05 a.m. However, if you
# perform a LARGE number of INSERTS/UPDATES, it is better for you to do the
# following:
#
# DROP INDEX index_on_heavily_used_table;
# VACUUM ANALYZE;
# CREATE INDEX index_on_heavily_used_table;
#
# Because VACUUM will sit there, and, row by row, essentially "defragment"
# your indexes, which can take damn near forever for any number of updates or
# deletes greater than, say, 30,000 rows.
#
# [Snip]
#
# -----Original Message-----
# From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]
# Sent: Thursday, January 06, 2000 10:14 AM
# To: Dustin Sallings
# Cc: The Hermit Hacker; pgsql-general@hub.org
# Subject: Re: [GENERAL] Benchmarks
#
#
# > Untrue, vacuum is *extremely* important for updating statistics.
# > If you have a lot of data in a table, and you have never vacuumed, you
# > might as well not have any indices. It'd be nice if you could separate
# > the stat update from the storage reclaim. Actually, it'd be nice if you
# > could reuse storage, so that an actual vacuum wouldn't be necessary unless
# > you just wanted to free up disk space you might end up using again anyway.
# >
# > The vacuum also doesn't seem to be very efficient. In one of my
# > databases, a vacuum could take in excess of 24 hours, while I've written a
# > small SQL script that does a select rename and a insert into select from
# > that will do the same job in about ten minutes. This is a database that
# > cannot lock for more than a few minutes.
#
# This is serious. Why would an INSERT / RENAME be so much faster? Are
# we that bad with VACUUM?
#
# --
# Bruce Momjian | http://www.op.net/~candle
# maillist@candle.pha.pa.us | (610) 853-3000
# + If your life is a hard drive, | 830 Blythe Avenue
# + Christ can be your backup. | Drexel Hill, Pennsylvania 19026
#
# ************
#
#
--
SA, beyond.com My girlfriend asked me which one I like better.
pub 1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin@spy.net>
| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________