COPY optimization issue

Started by Terry Fielder · almost 24 years ago · 2 messages · docs
#1 Terry Fielder
terry@greatgulfhomes.com

My postgres database does a nightly sync with a Legacy database by
clobbering a postgres table with data from a CSV.

I currently just use the COPY command, but with over 800,000 records, this
takes quite some time.

Is there a faster way?

E.g., I notice that a validation failure on ANY record causes the entire
copy to roll back. Is this begin/commit action wrapped around the COPY
costing me CPU cycles? And if so, can I turn it off, or is there a better
way than using COPY?

Note: I do nightly vacuums, so deleted tuples are not the issue, I don't
think.

Thanks

Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Terry Fielder (#1)
Re: COPY optimization issue

terry@greatgulfhomes.com writes:

> I currently just use the COPY command, but with over 800,000 records, this
> takes quite some time.
> Is there a faster way?

Dropping and recreating the indexes might help. See
http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/populate.html
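The advice above can be sketched as follows. This is a minimal illustration, not the poster's actual job: it uses Python's built-in sqlite3 as a stand-in database, and the table, index, and row counts are all assumptions. The principle is the one the linked docs describe for Postgres COPY: every row inserted into an indexed table also updates the index incrementally, so for a full reload it is usually cheaper to drop the index, bulk-load the rows, and rebuild the index once at the end (in Postgres, via DROP INDEX, COPY, then CREATE INDEX).

```python
import sqlite3

# In-memory database as a stand-in for the Postgres table being reloaded.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sync_target (id INTEGER, name TEXT)")
cur.execute("CREATE INDEX idx_name ON sync_target (name)")

# Hypothetical nightly CSV contents (much smaller than the real 800,000 rows).
rows = [(i, "name-%d" % i) for i in range(100_000)]

# Fast path: drop the index, bulk-load all rows, rebuild the index once,
# instead of maintaining idx_name incrementally on every insert.
cur.execute("DROP INDEX idx_name")
cur.execute("DELETE FROM sync_target")  # the "clobber" step
cur.executemany("INSERT INTO sync_target VALUES (?, ?)", rows)
cur.execute("CREATE INDEX idx_name ON sync_target (name)")
conn.commit()

cur.execute("SELECT COUNT(*) FROM sync_target")
print(cur.fetchone()[0])
```

The same shape applies in Postgres: run DROP INDEX, then COPY FROM the CSV, then CREATE INDEX, all inside one transaction so readers never see the table unindexed or half-loaded.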

regards, tom lane