Happy Anniversary

Started by Peter Eisentraut over 24 years ago · 7 messages
#1 Peter Eisentraut
peter_e@gmx.net

I suppose few people have remembered that today is what could be
considered the 5th anniversary of the PostgreSQL project. Cheers for
another five years!

http://www.ca.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00552.html

--
Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter

#2 Bruce Momjian
pgman@candle.pha.pa.us
In reply to: Peter Eisentraut (#1)
Re: Happy Anniversary

I suppose few people have remembered that today is what could be
considered the 5th anniversary of the PostgreSQL project. Cheers for
another five years!

http://www.ca.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00552.html

Good catch! Yes, you are right.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#3 Naomi Walker
nwalker@eldocomp.com
In reply to: Peter Eisentraut (#1)
Postgresql bulk fast loader

Does postgresql have any sort of fast bulk loader?
--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100 ext 242

#4 mlw
markw@mohawksoft.com
In reply to: Naomi Walker (#3)
Re: Postgresql bulk fast loader

Naomi Walker wrote:

Does postgresql have any sort of fast bulk loader?

It has a very cool SQL extension called COPY. Super fast.

Command: COPY
Description: Copies data between files and tables
Syntax:
COPY [ BINARY ] table [ WITH OIDS ]
FROM { 'filename' | stdin }
[ [USING] DELIMITERS 'delimiter' ]
[ WITH NULL AS 'null string' ]
COPY [ BINARY ] table [ WITH OIDS ]
TO { 'filename' | stdout }
[ [USING] DELIMITERS 'delimiter' ]
[ WITH NULL AS 'null string' ]
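For concreteness, a minimal usage sketch against the syntax quoted above; the table name "rawdata" and the file paths are hypothetical:

-- Bulk-load pipe-delimited rows from a server-side file:
COPY rawdata FROM '/tmp/rawdata.txt' USING DELIMITERS '|';

-- Dump the table back out the same way:
COPY rawdata TO '/tmp/rawdata.out' USING DELIMITERS '|';

From psql, \copy runs the same operation against a client-side file.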

#5 Bruce Momjian
pgman@candle.pha.pa.us
In reply to: Naomi Walker (#3)
Re: Postgresql bulk fast loader

Does postgresql have any sort of fast bulk loader?

COPY command.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#6 Mark Volpe
volpe.mark@epa.gov
In reply to: Naomi Walker (#3)
Re: Postgresql bulk fast loader

Avoid doing this with indexes on the table, though. I learned the hard way!
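A minimal sketch of that workaround (drop the indexes, COPY, then rebuild); the table, index, column, and path names are hypothetical:

-- Drop the index so COPY only has to write heap pages:
DROP INDEX rawdata_id_idx;
COPY rawdata FROM '/tmp/rawdata.txt' USING DELIMITERS '|';
-- Rebuild the index and refresh planner statistics afterwards:
CREATE INDEX rawdata_id_idx ON rawdata (id);
VACUUM ANALYZE rawdata;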

Mark

mlw wrote:

Naomi Walker wrote:

Does postgresql have any sort of fast bulk loader?

It has a very cool SQL extension called COPY. Super fast.

Command: COPY
Description: Copies data between files and tables
Syntax:
COPY [ BINARY ] table [ WITH OIDS ]
FROM { 'filename' | stdin }
[ [USING] DELIMITERS 'delimiter' ]
[ WITH NULL AS 'null string' ]
COPY [ BINARY ] table [ WITH OIDS ]
TO { 'filename' | stdout }
[ [USING] DELIMITERS 'delimiter' ]
[ WITH NULL AS 'null string' ]

#7 Guy Fraser
guy@incentre.net
In reply to: Naomi Walker (#3)
Re: Re: Postgresql bulk fast loader

Mark Volpe wrote:

Avoid doing this with indexes on the table, though. I learned the hard way!

Mark

mlw wrote:

Naomi Walker wrote:

Does postgresql have any sort of fast bulk loader?

It has a very cool SQL extension called COPY. Super fast.

Command: COPY
Description: Copies data between files and tables
Syntax:
COPY [ BINARY ] table [ WITH OIDS ]
FROM { 'filename' | stdin }
[ [USING] DELIMITERS 'delimiter' ]
[ WITH NULL AS 'null string' ]
COPY [ BINARY ] table [ WITH OIDS ]
TO { 'filename' | stdout }
[ [USING] DELIMITERS 'delimiter' ]
[ WITH NULL AS 'null string' ]

Hi

On a daily basis I have an automated procedure that bulk copies
information into a "holding" table. I scan for duplicates and put the
OID for the first unique record into a temporary table. Using the OID
and other information I do an INSERT with SELECT to move the unique
data into its appropriate table. Then I remove the unique records and
move the duplicates into a debugging table. After that I remove the
remaining records and drop the temporary tables. Once this is done I
vacuum the tables and regenerate the indexes.
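A rough SQL sketch of that flow; every table and column name below is a hypothetical stand-in, and it keys off row OIDs in the same way:

-- 1. Bulk-load the day's feed into the holding table.
COPY holding FROM '/data/feed.txt' USING DELIMITERS '|';

-- 2. Keep the OID of the first row for each logical key.
SELECT min(oid) AS keep_oid
  INTO TEMP uniq
  FROM holding
 GROUP BY acct_id, event_time;

-- 3. Move the unique rows into their real table.
INSERT INTO detail (acct_id, event_time, payload)
SELECT h.acct_id, h.event_time, h.payload
  FROM holding h, uniq u
 WHERE h.oid = u.keep_oid;

-- 4. Park the duplicates for debugging, then empty the holding table.
INSERT INTO dup_debug
SELECT * FROM holding WHERE oid NOT IN (SELECT keep_oid FROM uniq);
DELETE FROM holding;

-- 5. Clean up, vacuum, and regenerate the index.
DROP TABLE uniq;
VACUUM ANALYZE detail;
DROP INDEX detail_acct_idx;
CREATE INDEX detail_acct_idx ON detail (acct_id);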

This sounds complicated, but by doing the work in quick, simple
transactions the database keeps running without interruption. I am able
to import 30+ MB of data every day with only a small disruption when
updating the summary tables.

Guy Fraser

--
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.