That dump-comments-on-composite-type-columns patch...
... doesn't work.
$ pg_dump regression >zzz.out
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: "complex" is a composite type
pg_dump: The command was: COPY public.complex (r, i) TO stdout;
$
I suspect it had more subtle problems too, because dumpTableComments
would have attached the comments to the dumpid associated with the
TableInfo entry, which isn't the object that will get dumped. So it
seems moderately likely that there would have been a potential for
misordering of the output.
I think it's probably a fundamentally bad idea to be putting composite
types into pg_dump's TableInfo array, because they just really aren't
tables at all. If you want to try again, I'd suggest writing a variant
of dumpTableComment that takes a TypeInfo and the attribute-names query
data obtained by dumpCompositeType.
regards, tom lane
> $ pg_dump regression >zzz.out
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: "complex" is a composite type
> pg_dump: The command was: COPY public.complex (r, i) TO stdout;
> $
That could be fixed by just checking the relkind when dumping table
data, but hey.
> I suspect it had more subtle problems too, because dumpTableComments
> would have attached the comments to the dumpid associated with the
> TableInfo entry, which isn't the object that will get dumped. So it
> seems moderately likely that there would have been a potential for
> misordering of the output.
Ok.
> I think it's probably a fundamentally bad idea to be putting composite
> types into pg_dump's TableInfo array, because they just really aren't
> tables at all. If you want to try again, I'd suggest writing a variant
> of dumpTableComment that takes a TypeInfo and the attribute-names query
> data obtained by dumpCompositeType.
You mean unlike views, sequences and all other kinds of junk? :)
OK, I can do this, but I don't think I'll have time for the first beta.
Chris
ps. Did you back out the moving of owner to commands as well?
Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:
> OK, I can do this, but I don't think I'll have time for the first beta.
No problem.
> ps. Did you back out the moving of owner to commands as well?
No, just the composite-type thing.
regards, tom lane
> Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:
>> OK, I can do this, but I don't think I'll have time for the first beta.
> No problem.
Another interesting thing I noticed in pg_dump is dumping of LOBs. It
seems to declare a cursor that fetches the blobs and then issues a fetch
1000 to get the first 1000 lobs. It never seems to execute any further
fetches. Am I right that if you have more than 1000 lobs in postgres,
pg_dump won't dump them?
Chris
Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:
> Another interesting thing I noticed in pg_dump is dumping of LOBs. It
> seems to declare a cursor that fetches the blobs and then issues a fetch
> 1000 to get the first 1000 lobs. It never seems to execute any further
> fetches.
Huh? That's inside a do-loop.
regards, tom lane
>> Another interesting thing I noticed in pg_dump is dumping of LOBs. It
>> seems to declare a cursor that fetches the blobs and then issues a fetch
>> 1000 to get the first 1000 lobs. It never seems to execute any further
>> fetches.
> Huh? That's inside a do-loop.
Errr, yeah. Don't know what made me not notice that :/
Christopher Kings-Lynne wrote:
>> Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:
>>> OK, I can do this, but I don't think I'll have time for the first beta.
>> No problem.
> Another interesting thing I noticed in pg_dump is dumping of LOBs. It
> seems to declare a cursor that fetches the blobs and then issues a fetch
> 1000 to get the first 1000 lobs. It never seems to execute any further
> fetches. Am I right that if you have more than 1000 lobs in postgres,
> pg_dump won't dump them?
I checked, and though the FETCH command string is built outside the loop,
it keeps getting executed inside the loop.
There was some strange capitalization that confused things and I cleaned
that up.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073