Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]

Started by Li Lixin over 24 years ago, 24 messages
#1 Li Lixin
lilixin@cqu.edu.cn

Bruce Momjian:
I am a beginner. My question is whether PostgreSQL supports full entity integrity
and referential integrity. For example, does it support restricted delete, nullifies (set-null) delete, default delete, and so on? I read your book but could not find the details. Where can I find them?


Li Lixin
lilixin@cqu.edu.cn

#2 Thalis A. Kalfigopoulos
thalis@cs.pitt.edu
In reply to: Li Lixin (#1)
Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]

I'm guessing you are asking about support for referential integrity constraints. It's covered in Bruce's book at http://www.ca.postgresql.org/docs/aw_pgsql_book/node131.html (ON DELETE NO ACTION/SET NULL/SET DEFAULT)

cheers,
thalis

On Wed, 20 Jun 2001, Li Lixin wrote:

Bruce Momjian:
I am a beginner. My question is whether PostgreSQL supports full entity integrity
and referential integrity. For example, does it support restricted delete, nullifies (set-null) delete, default delete, and so on? I read your book but could not find the details. Where can I find them?


#3 Naomi Walker
nwalker@eldocomp.com
In reply to: Li Lixin (#1)
2 gig file size limit

If PostgreSQL is run on a system that has a file size limit (2 gig?), where
might we hit the limit?
--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100 ext 242

#4 Larry Rosenman
ler@lerctr.org
In reply to: Naomi Walker (#3)
Re: [HACKERS] 2 gig file size limit

* Naomi Walker <nwalker@eldocomp.com> [010706 17:57]:

If PostgreSQL is run on a system that has a file size limit (2 gig?), where
might we hit the limit?

PostgreSQL is smart and breaks table files into segments of roughly 1GB each,
so it's transparent to you.

You shouldn't have to worry about it.
LER


--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749

#5 Lamar Owen
lamar.owen@wgcr.org
In reply to: Naomi Walker (#3)
Re: [HACKERS] 2 gig file size limit

On Friday 06 July 2001 18:51, Naomi Walker wrote:

If PostgreSQL is run on a system that has a file size limit (2 gig?), where
might we hit the limit?

Since PostgreSQL automatically segments its internal data files to get around
such limits, the only place you will hit this limit will be when making
backups using pg_dump or pg_dumpall. You may need to pipe the output of
those commands into a file splitting utility, and then you'll have to pipe
through a reassembly utility to restore.
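
For example, here is a rough sketch of that approach using split and cat (the
database name "mydb" and the 1GB chunk size are just placeholders):

# dump and split into ~1GB pieces as the dump is written
pg_dump mydb | split -b 1000m - mydb.dump.

# later: recreate the database, reassemble the pieces, and restore
createdb mydb
cat mydb.dump.* | psql mydb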
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

#6 Joseph Shraibman
jks@selectacast.net
In reply to: Naomi Walker (#3)
Re: Backups WAS: 2 gig file size limit

Lamar Owen wrote:

On Friday 06 July 2001 18:51, Naomi Walker wrote:

If PostgreSQL is run on a system that has a file size limit (2 gig?), where
might we hit the limit?

Since PostgreSQL automatically segments its internal data files to get around
such limits, the only place you will hit this limit will be when making
backups using pg_dump or pg_dumpall. You may need to pipe the output of

Speaking of which.

Doing a dumpall for a backup is taking a long time, and a restore from
the dump files doesn't leave the database in its original state. Could
a command be added that locks all the files, quickly tars them up, then
releases the lock?

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio. http://www.targabot.com

#7 Doug McNaught
doug@wireboard.com
In reply to: Naomi Walker (#3)
Re: Re: Backups WAS: 2 gig file size limit

[HACKERS removed from CC: list]

Joseph Shraibman <jks@selectacast.net> writes:

Doing a dumpall for a backup is taking a long time, and a restore from
the dump files doesn't leave the database in its original state. Could
a command be added that locks all the files, quickly tars them up, then
releases the lock?

As I understand it, pg_dump runs inside a transaction, so the output
reflects a consistent snapshot of the database as of the time the dump
starts (thanks to MVCC); restoring will put the database back to where
it was at the start of the dump.

Have you observed otherwise?

-Doug
--
The rain man gave me two cures; he said jump right in,
The first was Texas medicine--the second was just railroad gin,
And like a fool I mixed them, and it strangled up my mind,
Now people just get uglier, and I got no sense of time... --Dylan

#8 Joseph Shraibman
jks@selectacast.net
In reply to: Naomi Walker (#3)
Re: Re: Backups WAS: 2 gig file size limit

Doug McNaught wrote:

[HACKERS removed from CC: list]

Joseph Shraibman <jks@selectacast.net> writes:

Doing a dumpall for a backup is taking a long time, and a restore from
the dump files doesn't leave the database in its original state. Could
a command be added that locks all the files, quickly tars them up, then
releases the lock?

As I understand it, pg_dump runs inside a transaction, so the output
reflects a consistent snapshot of the database as of the time the dump
starts (thanks to MVCC); restoring will put the database back to where
it was at the start of the dump.

In theory.

Have you observed otherwise?

Yes. Specifically, timestamps are dumped in a way that (1) they lose
precision and (2) sometimes have 60 in the seconds field, which prevents the
dump from being restored.

And I suspect any statistics generated by VACUUM ANALYZE are lost.

If a database got corrupted somehow, restoring from the dump would mean
deleting the original database and then restoring from the dump. Untarring
would be much easier (especially as the database grows). Obviously this
won't replace dumps, but for quick backups it would be great.

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio. http://www.targabot.com

#9 Mike Castle
dalgoda@ix.netcom.com
In reply to: Joseph Shraibman (#8)
Re: Re: Backups WAS: 2 gig file size limit

On Mon, Jul 09, 2001 at 08:59:59PM -0400, Joseph Shraibman wrote:

If a database got corrupted somehow, restoring from the dump would mean
deleting the original database and then restoring from the dump. Untarring
would be much easier (especially as the

You could always shut the system down and tar on your own.

Of course, tarring up several gigabytes is going to take a while.

Better to fix the dump/restore process than to hack in a workaround that
has very limited benefit.

mrc
--
Mike Castle dalgoda@ix.netcom.com www.netcom.com/~dalgoda/
We are all of us living in the shadow of Manhattan. -- Watchmen
fatal ("You are in a maze of twisty compiler features, all different"); -- gcc

#10 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Joseph Shraibman (#8)
Re: Re: Backups WAS: 2 gig file size limit

Joseph Shraibman <jks@selectacast.net> writes:

Could a command be added that locks all the files, quickly tars them
up, then releases the lock?

pg_ctl stop
tar cfz - $PGDATA >someplace
pg_ctl start

There is no possibility of anything less drastic, if you want to ensure
that the database files are consistent and not changing. Don't even
think about doing a partial dump of the $PGDATA tree, either. If you
don't have a pg_log that matches your data files, you've got nothing.

regards, tom lane

#11 Thomas Lockhart
lockhart@fourpalms.org
In reply to: Naomi Walker (#3)
Re: Re: Backups WAS: 2 gig file size limit

Have you observed otherwise?

Yes. Specifically, timestamps are dumped in a way that (1) they lose
precision and (2) sometimes have 60 in the seconds field, which prevents the
dump from being restored.

The loss of precision for timestamp data stems from conservative
attempts to get consistent behavior from the data type. It is certainly
not entirely successful, but changes would have to solve some of these
problems without introducing more.

I've only seen the "60 seconds problem" with earlier Mandrake distros
which combined normal compiler optimizations with a "fast math"
optimization, against the apparent advice of the gcc developers. What
kind of system are you on, and how did you build PostgreSQL?

Regards.

- Thomas

#12 Joseph Shraibman
jks@selectacast.net
In reply to: Naomi Walker (#3)
Re: Re: Backups WAS: 2 gig file size limit

Tom Lane wrote:

Joseph Shraibman <jks@selectacast.net> writes:

Could a command be added that locks all the files, quickly tars them
up, then releases the lock?

pg_ctl stop
tar cfz - $PGDATA >someplace
pg_ctl start

But that would mean all my programs would have to detect that the
database went down and make new connections. I would rather have
postgres just lock all the files and do the tar.

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio. http://www.targabot.com

#13 Joseph Shraibman
jks@selectacast.net
In reply to: Naomi Walker (#3)
Re: Re: Backups WAS: 2 gig file size limit

I mentioned this on general a while ago.

I had the problem when I dumped my 7.0.3 db to upgrade to 7.1. I had to
modify the dump because there were some 60s in the seconds fields. It was
obvious from the code in backend/utils/adt/datetime that it was using
sprintf to do the formatting, and sprintf was taking the float that
represented the seconds and rounding it.

select '2001-07-10 15:39:59.999'::timestamp;
?column?
---------------------------
2001-07-10 15:39:60.00-04
(1 row)
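
Just to illustrate the rounding mechanism (this is not the actual backend code,
only the same kind of printf-style formatting reproduced from the shell; the
second line is the output):

printf '%05.2f\n' 59.999
60.00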

Thomas Lockhart wrote:

Have you observed otherwise?

Yes. Specifically, timestamps are dumped in a way that (1) they lose
precision and (2) sometimes have 60 in the seconds field, which prevents the
dump from being restored.

The loss of precision for timestamp data stems from conservative
attempts to get consistent behavior from the data type. It is certainly
not entirely successful, but changes would have to solve some of these
problems without introducing more.

I've only seen the "60 seconds problem" with earlier Mandrake distros
which combined normal compiler optimizations with a "fast math"
optimization, against the apparent advice of the gcc developers. What
kind of system are you on, and how did you build PostgreSQL?

Regards.

- Thomas

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio. http://www.targabot.com

#14 Neil Conway
nconway@klamath.dyndns.org
In reply to: Naomi Walker (#3)
Re: 2 gig file size limit

(This question was answered several days ago on this list; please check
the list archives before posting. I believe it's also in the FAQ.)

If PostgreSQL is run on a system that has a file size limit (2
gig?), where might we hit the limit?

Postgres will never internally use files (e.g. for tables, indexes,
etc) larger than 1GB -- at that point, the file is split.
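
For illustration, a table that has grown past 1GB simply shows up as several
segment files in the data directory, roughly like this (the directory and the
numeric file names below are made up; the point is the .1/.2 suffixes and the
1GB size of each full segment):

ls -l $PGDATA/base/18719/18735*
-rw-------  1 postgres postgres 1073741824 Jul 11 12:00 18735
-rw-------  1 postgres postgres 1073741824 Jul 11 12:00 18735.1
-rw-------  1 postgres postgres  427458560 Jul 11 12:00 18735.2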

However, you might run into problems when you export the data from Pg
to another source, such as if you pg_dump the contents of a database >
2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the
size of the dump. If that's still not enough, you can dump individual
tables (with -t) or use 'split' to divide the dump into several files.
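
For instance, the compressed route looks like this (the database name is again
a placeholder; compression usually, but not always, keeps a large dump under a
2GB limit):

pg_dump mydb | gzip > mydb.dump.gz

# restore later with:
gunzip -c mydb.dump.gz | psql mydb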

Cheers,

Neil

#15 Bruce Momjian
pgman@candle.pha.pa.us
In reply to: Neil Conway (#14)
Re: 2 gig file size limit

(This question was answered several days ago on this list; please check
the list archives before posting. I believe it's also in the FAQ.)

If PostgreSQL is run on a system that has a file size limit (2
gig?), where might we hit the limit?

Postgres will never internally use files (e.g. for tables, indexes,
etc) larger than 1GB -- at that point, the file is split.

However, you might run into problems when you export the data from Pg
to another source, such as if you pg_dump the contents of a database >
2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the
size of the dump. If that's still not enough, you can dump individual
tables (with -t) or use 'split' to divide the dump into several files.

I just added the second part of this sentence to the FAQ to try to make
it more visible:

The maximum table size of 16TB does not require large file
support from the operating system. Large tables are stored as
multiple 1GB files so file system size limits are not important.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

#16 Thomas Lockhart
lockhart@alumni.caltech.edu
In reply to: Naomi Walker (#3)
Re: Re: Backups WAS: 2 gig file size limit

I mentioned this on general a while ago.

I'm not usually there/here, but subscribed recently to avoid annoying
bounce messages from replies to messages cross posted to -hackers. I may
not stay long, since the volume is hard to keep up with.

I had the problem when I dumped my 7.0.3 db to upgrade to 7.1. I had to
modify the dump because there were some 60s in the seconds fields. It was
obvious from the code in backend/utils/adt/datetime that it was using
sprintf to do the formatting, and sprintf was taking the float that
represented the seconds and rounding it.

select '2001-07-10 15:39:59.999'::timestamp;
?column?
---------------------------
2001-07-10 15:39:60.00-04
(1 row)

Ah, right. I remember that now. Will continue to look at it...

- Thomas

#17 Noname
markMLl.pgsql-general@telemetry.co.uk
In reply to: Bruce Momjian (#15)
Re: 2 gig file size limit

Can a single database be split over multiple filesystems, or does the
filesystem size under e.g. Linux (whatever it is these days) constrain
the database size?

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]

#18 Noname
markMLl.pgsql-general@telemetry.co.uk
In reply to: Bruce Momjian (#15)
Re: 2 gig file size limit

Ian Willis wrote:

Postgresql transparently breaks the db into 1G chunks.

Yes, but presumably these are still in the directory tree that was
created by initdb, i.e. normally on a single filesystem.

The main concern is during dumps. A 10G db can't be dumped if the
filesystem has a 2G limit.

Which is why somebody suggested piping into tar or whatever.

Linux no longer has a filesystem file size limit (or at least not one
that you'll hit easily)

I'm not concerned with "easily". Telling one of our customers that we
chose a particular server because they won't easily hit limits is a
non-starter.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]

#19 Martijn van Oosterhout
kleptog@svana.org
In reply to: Noname (#18)
Re: 2 gig file size limit

On Wed, Jul 11, 2001 at 12:06:05PM +0000, markMLl.pgsql-general@telemetry.co.uk wrote:

Linux no longer has a filesystem file size limit (or at least not one
that you'll hit easily)

I'm not concerned with "easily". Telling one of our customers that we
chose a particular server because they won't easily hit limits is a
non-starter.

Many people would have great difficulty hitting 4 terabytes.

What's the limit on NT?
--
Martijn van Oosterhout <kleptog@svana.org>
http://svana.org/kleptog/

It would be nice if someone came up with a certification system that
actually separated those who can barely regurgitate what they crammed over
the last few weeks from those who command secret ninja networking powers.

#20 Tony Grant
tony@animaproductions.com
In reply to: Martijn van Oosterhout (#19)
JDBC and stored procedures

Hello,

I am trying to use a stored procedure via JDBC. The objective is to be
able to get data from more than one table. My procedure is a simple "get
country name from table countries where country code = $1", copied from
Bruce's book.

Ultradev is giving me "Error calling GetProcedures: An unidentified
error has occured"

Just thought I would ask here first if I am up against a brick wall?

Cheers

Tony Grant

--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html

#21 Dave Cramer
Dave@micro-automation.net
In reply to: Tony Grant (#20)
Re: [JDBC] JDBC and stored procedures

Tony,

The GetProcedures function in the driver does not work.
You should be able to do a simple SELECT of the stored proc, however.

Dave


#22 Tony Grant
tony@animaproductions.com
In reply to: Dave Cramer (#21)
Re: [JDBC] JDBC and stored procedures

On 11 Jul 2001 10:20:29 -0400, Dave Cramer wrote:

The GetProcedures function in the driver does not work.

OK. I bet it is on the todo list =:-D

You should be able to do a simple SELECT of the stored proc, however.

Yes! thank you very much!!!

SELECT getcountryname(director.country)

did the trick, where getcountryname is the function (or stored procedure).

Cheers

Tony

--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html

#23 Dave Cramer
Dave@micro-automation.net
In reply to: Tony Grant (#22)
RE: JDBC and stored procedures

The getProcedures API is on the todo list, but I don't think it returns
stored procs.

Dave


#24 Noname
markMLl.pgsql-general@telemetry.co.uk
In reply to: Bruce Momjian (#15)
Re: 2 gig file size limit

Martijn van Oosterhout wrote:

What's the limit on NT?

I'm told 2^64 bytes. Frankly, I'd be surprised if MS has tested it :-)

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]