new set of psql patches for loading (saving) data from (to) text, binary files

Started by Pavel Stehule, over 9 years ago · 18 messages · pgsql-hackers
#1 Pavel Stehule
pavel.stehule@gmail.com

Hi

I am sending a set of patches - for simpler testing, these patches are
independent at this moment.

These patches are a replacement for my previous patches in this area: COPY RAW
and fileref variables.

1. parametrized queries support - the psql variables can be passed as query
parameters

2. \gstore, \gbstore - save returned (binary) value to file

3. \set_from_file, \set_from_bfile - set a variable from a (binary) file

The code is simple - there are no changes in critical or complex parts
of psql.

Regards

Pavel

Comments, notes?
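To make the proposal concrete, a hypothetical psql session using the three features might look like this (syntax as sketched in this thread; the exact names and behavior are those of the attached patches, so details may differ, and the table and columns are illustrative only):

```
-- 1. parametrized queries: psql variables passed as query parameters
\set name 'Pavel'
SELECT * FROM people WHERE first_name = :name;   -- sent as a protocol parameter

-- 2. \gstore / \gbstore: save a single returned (binary) value to a file
SELECT avatar FROM people WHERE id = 1
\gbstore ~/avatar.jpg

-- 3. \set_from_file / \set_from_bfile: set a variable from a (binary) file
\set_from_bfile image ~/avatar.jpg
INSERT INTO people(avatar) VALUES (:image);
```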

Attachments:

psql-gstore-01.patch (text/x-patch; charset=US-ASCII) +153 -8
psql-paramatrized_queries-01.patch (text/x-patch; charset=US-ASCII) +172 -17
psql-set-from-file-01.patch (text/x-patch; charset=US-ASCII) +209 -2
#2 Jason O'Donnell
odonnelljp01@gmail.com
In reply to: Pavel Stehule (#1)
Re: new set of psql patches for loading (saving) data from (to) text, binary files

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: not tested
Documentation: tested, failed

Pavel,

gstore/gbstore:

The functionality worked as expected - one-row, one-column query results
can be sent to a file or shell. It would be nice if a test case were
included that proves results wider than one row and one column will fail.

The documentation included is awkward to read. How about:

"Sends the current query input buffer to the server and stores
the result in an output file specified in the query, or pipes the output
to a shell command. The file or command is written to only if the query
successfully returns exactly one, non-null row and column. If the
query fails or does not return data, an error is raised."

Parameterized Queries:

The functionality proposed works as expected. Throughout the documentation, code and test cases the word "Parameterized" is spelled incorrectly: "PARAMETRIZED_QUERIES"

set_from_file/set_from_bfile:

The functionality proposed worked fine; I was able to set variables in SQL from files. Minor typo in the documentation:
"The content is escapeaed as bytea value."

Hope this helps!

Jason O'Donnell
Crunchy Data

The new status of this patch is: Waiting on Author

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Jason O'Donnell (#2)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

Hi

Thank you for review

2017-01-09 17:24 GMT+01:00 Jason O'Donnell <odonnelljp01@gmail.com>:

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: not tested
Documentation: tested, failed

Pavel,

gstore/gbstore:

The functionality worked as expected - one-row, one-column query results
can be sent to a file or shell. It would be nice if a test case were
included that proves results wider than one row and one column will fail.

fixed

The documentation included is awkward to read. How about:

"Sends the current query input buffer to the server and stores
the result in an output file specified in the query, or pipes the output
to a shell command. The file or command is written to only if the query
successfully returns exactly one, non-null row and column. If the
query fails or does not return data, an error is raised."

super

Parameterized Queries:

The functionality proposed works as expected. Throughout the
documentation, code and test cases the word "Parameterized" is spelled
incorrectly: "PARAMETRIZED_QUERIES"

fixed

set_from_file/set_from_bfile:

The functionality proposed worked fine; I was able to set variables in SQL
from files. Minor typo in the documentation:
"The content is escapeaed as bytea value."

fixed


Hope this helps!

Jason O'Donnell
Crunchy Data

The new status of this patch is: Waiting on Author


Attachments:

psql-gstore-02.patch (text/x-patch; charset=US-ASCII) +249 -8
psql-parameterized-queries-02.patch (text/x-patch; charset=US-ASCII) +172 -17
psql-set-from-file-02.patch (text/x-patch; charset=US-ASCII) +209 -2
#4 Michael Paquier
michael@paquier.xyz
In reply to: Pavel Stehule (#3)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

On Wed, Jan 11, 2017 at 12:32 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:

Thank you for review

Moved to next CF 2017-03.
--
Michael


#5 Stephen Frost
sfrost@snowman.net
In reply to: Pavel Stehule (#3)
Re: new set of psql patches for loading (saving) data from (to) text, binary files

Pavel,

I started looking through this to see if it might be ready to commit and
I don't believe it is. Below are my comments about the first patch, I
didn't get to the point of looking at the others yet since this one had
issues.

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-01-09 17:24 GMT+01:00 Jason O'Donnell <odonnelljp01@gmail.com>:

gstore/gbstore:

I don't see the point to 'gstore' - how is that usefully different from
just using '\g'? Also, the comments around these are inconsistent, some
say they can only be used with a filename, others say it could be a
filename or a pipe+command.

There's a whitespace-only hunk that shouldn't be included.

I don't agree with the single-column/single-row restriction on these. I
can certainly see a case where someone might want to, say, dump out a
bunch of binary integers into a file for later processing.

The tab-completion for 'gstore' wasn't correct (you didn't include the
double-backslash). The patch also has conflicts against current master
now.

I guess my thinking about moving this forward would be to simplify it to
just '\gb' which will pull the data from the server side in binary
format and dump it out to the filename or command given. If there's a
new patch with those changes, I'll try to find time to look at it.

I would recommend going through a detailed review of the other patches
as well before rebasing and re-submitting them; in particular, look to
make sure that the comments are correct and consistent, and that there are
comments where there should be (generally speaking, whole functions
should have at least some comments in them, not just the function header
comment, etc.).

Lastly, I'd suggest creating a 'psql.source' file for the regression
tests instead of just throwing things into 'misc.source'. Seems like we
should probably have more psql-related testing anyway and dumping
everything into 'misc.source' really isn't a good idea.

Thanks!

Stephen

#6 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Stephen Frost (#5)
Re: new set of psql patches for loading (saving) data from (to) text, binary files

Hi

2017-03-15 17:21 GMT+01:00 Stephen Frost <sfrost@snowman.net>:

Pavel,

I started looking through this to see if it might be ready to commit and
I don't believe it is. Below are my comments about the first patch, I
didn't get to the point of looking at the others yet since this one had
issues.

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-01-09 17:24 GMT+01:00 Jason O'Donnell <odonnelljp01@gmail.com>:

gstore/gbstore:

I don't see the point to 'gstore' - how is that usefully different from
just using '\g'? Also, the comments around these are inconsistent, some
say they can only be used with a filename, others say it could be a
filename or a pipe+command.

\gstore ensures a dump of raw row data. It can be replaced by \g with some
other settings, but if the query is not unique, then the result can be
messy - which is not possible with \gbstore.

More interesting is \gbstore, which uses the binary API - it can be used for
bytea fields or for XML fields with an implicit, correct encoding change.
\gbstore cannot be replaced by \g.

There's a whitespace-only hunk that shouldn't be included.

I don't agree with the single-column/single-row restriction on these. I
can certainly see a case where someone might want to, say, dump out a
bunch of binary integers into a file for later processing.

The tab-completion for 'gstore' wasn't correct (you didn't include the
double-backslash). The patch also has conflicts against current master
now.

I guess my thinking about moving this forward would be to simplify it to
just '\gb' which will pull the data from the server side in binary
format and dump it out to the filename or command given. If there's a
new patch with those changes, I'll try to find time to look at it.

ok I'll prepare patch


I would recommend going through a detailed review of the other patches
as well before rebasing and re-submitting them; in particular, look to
make sure that the comments are correct and consistent, and that there are
comments where there should be (generally speaking, whole functions
should have at least some comments in them, not just the function header
comment, etc.).

Lastly, I'd suggest creating a 'psql.source' file for the regression
tests instead of just throwing things into 'misc.source'. Seems like we
should probably have more psql-related testing anyway and dumping
everything into 'misc.source' really isn't a good idea.

Thanks!

Stephen

#7 Stephen Frost
sfrost@snowman.net
In reply to: Pavel Stehule (#6)
Re: new set of psql patches for loading (saving) data from (to) text, binary files

Pavel,

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-03-15 17:21 GMT+01:00 Stephen Frost <sfrost@snowman.net>:

I started looking through this to see if it might be ready to commit and
I don't believe it is. Below are my comments about the first patch, I
didn't get to the point of looking at the others yet since this one had
issues.

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-01-09 17:24 GMT+01:00 Jason O'Donnell <odonnelljp01@gmail.com>:

gstore/gbstore:

I don't see the point to 'gstore' - how is that usefully different from
just using '\g'? Also, the comments around these are inconsistent, some
say they can only be used with a filename, others say it could be a
filename or a pipe+command.

\gstore ensures a dump of raw row data. It can be replaced by \g with some
other settings, but if the query is not unique, then the result can be
messy - which is not possible with \gbstore.

I don't understand what you mean by "the result can be messy." We have
lots of options for controlling the output of the query and those can be
used with \g just fine. This seems like what you're doing is inventing
something entirely new which is exactly the same as setting the right
options which already exist and that seems odd to me.

Is it any different from setting \a and \t and then calling \g? If not,
then I don't see why it would be useful to add.
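For reference, the existing-options route Stephen describes can already be spelled in psql today (table, column, and file names here are illustrative):

```
\t on                 -- tuples only: no headers or footers
\a                    -- unaligned output mode
SELECT doc FROM docs WHERE id = 1 \g ~/doc.xml
```

This covers the text case; it cannot produce true binary output, which is the part \gbstore would add.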

More interesting is \gbstore, which uses the binary API - it can be used for
bytea fields or for XML fields with an implicit, correct encoding change.
\gbstore cannot be replaced by \g.

Yes, having a way to get binary data out using psql and into a file is
interesting and I agree that we should have that capability.

Further, what I think we will definitely need is a way to get binary
data out using psql at the command-line too. We have the -A and -t
switches which correspond to \a and \t, we should have something for
this too. Perhaps what that really calls for is a '\b' and a '-B'
option to go with it which will flip psql into binary mode, similar to
the other Formatting options. I realize it might seem a bit
counter-intuitive, but I can actually see use-cases for having binary
data spit out to $PAGER (when you have a $PAGER that handles it
cleanly, as less does, for example).

There's a whitespace-only hunk that shouldn't be included.

I don't agree with the single-column/single-row restriction on these. I
can certainly see a case where someone might want to, say, dump out a
bunch of binary integers into a file for later processing.

The tab-completion for 'gstore' wasn't correct (you didn't include the
double-backslash). The patch also has conflicts against current master
now.

I guess my thinking about moving this forward would be to simplify it to
just '\gb' which will pull the data from the server side in binary
format and dump it out to the filename or command given. If there's a
new patch with those changes, I'll try to find time to look at it.

ok I'll prepare patch

Great, thanks!

Stephen

#8 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Stephen Frost (#7)
Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-03-16 22:01 GMT+01:00 Stephen Frost <sfrost@snowman.net>:

Pavel,

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-03-15 17:21 GMT+01:00 Stephen Frost <sfrost@snowman.net>:

I started looking through this to see if it might be ready to commit and
I don't believe it is. Below are my comments about the first patch, I
didn't get to the point of looking at the others yet since this one had
issues.

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-01-09 17:24 GMT+01:00 Jason O'Donnell <odonnelljp01@gmail.com>:

gstore/gbstore:

I don't see the point to 'gstore' - how is that usefully different from
just using '\g'? Also, the comments around these are inconsistent, some
say they can only be used with a filename, others say it could be a
filename or a pipe+command.

\gstore ensures a dump of raw row data. It can be replaced by \g with some
other settings, but if the query is not unique, then the result can be
messy - which is not possible with \gbstore.

I don't understand what you mean by "the result can be messy." We have
lots of options for controlling the output of the query and those can be
used with \g just fine. This seems like what you're doing is inventing
something entirely new which is exactly the same as setting the right
options which already exist and that seems odd to me.

Is it any different from setting \a and \t and then calling \g? If not,
then I don't see why it would be useful to add.

I am searching for some comfortable way - I agree, it can be redundant
with already available functionality.

More interesting is \gbstore, which uses the binary API - it can be used for
bytea fields or for XML fields with an implicit, correct encoding change.
\gbstore cannot be replaced by \g.

Yes, having a way to get binary data out using psql and into a file is
interesting and I agree that we should have that capability.

Further, what I think we will definitely need is a way to get binary
data out using psql at the command-line too. We have the -A and -t
switches which correspond to \a and \t, we should have something for
this too. Perhaps what that really calls for is a '\b' and a '-B'
option to go with it which will flip psql into binary mode, similar to
the other Formatting options. I realize it might seem a bit
counter-intuitive, but I can actually see use-cases for having binary
data spit out to $PAGER (when you have a $PAGER that handles it
cleanly, as less does, for example).

It is an interesting idea. I am not sure if it is more a formatting option or
a general psql option. But it can be interesting for other purposes too.

One idea for importing files to Postgres via psql:

We can introduce \gloadfrom, which can replace parameters with file contents -
and this statement can work in text or binary mode, controlled by the proposed
option.

Something like:

insert into foo values('Pavel','Stehule', $1) \gloadfrom ~/avatar.jpg
insert into doc(id, doc) values(default, $1) \gloadfrom ~/mydoc.xml

Regards

Pavel


There's a whitespace-only hunk that shouldn't be included.

I don't agree with the single-column/single-row restriction on these. I
can certainly see a case where someone might want to, say, dump out a
bunch of binary integers into a file for later processing.

The tab-completion for 'gstore' wasn't correct (you didn't include the
double-backslash). The patch also has conflicts against current master
now.

I guess my thinking about moving this forward would be to simplify it to
just '\gb' which will pull the data from the server side in binary
format and dump it out to the filename or command given. If there's a
new patch with those changes, I'll try to find time to look at it.

ok I'll prepare patch

Great, thanks!

Stephen

#9 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Stephen Frost (#7)
Re: new set of psql patches for loading (saving) data from (to) text, binary files

Hi

There's a whitespace-only hunk that shouldn't be included.

I don't agree with the single-column/single-row restriction on these. I
can certainly see a case where someone might want to, say, dump out a
bunch of binary integers into a file for later processing.

The tab-completion for 'gstore' wasn't correct (you didn't include the
double-backslash). The patch also has conflicts against current master
now.

I guess my thinking about moving this forward would be to simplify it to
just '\gb' which will pull the data from the server side in binary
format and dump it out to the filename or command given. If there's a
new patch with those changes, I'll try to find time to look at it.

ok I'll prepare patch

Great, thanks!

I rewrote these patches - they allow binary export/import from psql, and
the code is very simple. The size of the patch is bigger due to including
a 4KB binary file (8KB in hex format).

What is done:

create table foo(a bytea);

-- import
insert into foo values($1)
\gloadfrom ~/xxx.jpg bytea

-- export
\pset format binary
select a from foo
\g ~/xxx2.jpg

tested on importing a 55MB binary file

Comments, notes?

Available import formats are limited to text, bytea and xml - these formats
are safe for receiving data via the recv function.

Regards

Pavel


Stephen

Attachments:

psql-binary-export-import.patch (text/x-patch; charset=US-ASCII) +313 -14
#10 Andres Freund
andres@anarazel.de
In reply to: Pavel Stehule (#9)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

Hi,

On 2017-03-18 17:51:48 +0100, Pavel Stehule wrote:

What is done:

create table foo(a bytea);

-- import
insert into foo values($1)
\gloadfrom ~/xxx.jpg bytea

-- export
\pset format binary
select a from foo
\g ~/xxx2.jpg

tested on importing a 55MB binary file

Comments, notes?

Available import formats are limited to text, bytea and xml - these formats
are safe for receiving data via the recv function.

I don't think we have design agreement on this at this point. Given the
upcoming code freeze, I think we'll have to hash this out during the
next development cycle. Any counterarguments?

- Andres


#11 Andres Freund
andres@anarazel.de
In reply to: Pavel Stehule (#9)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

On 2017-03-18 17:51:48 +0100, Pavel Stehule wrote:

What is done:

create table foo(a bytea);

-- import
insert into foo values($1)
\gloadfrom ~/xxx.jpg bytea

-- export
\pset format binary
select a from foo
\g ~/xxx2.jpg

tested on importing a 55MB binary file

Comments, notes?

I don't like the API here much. Loading requires knowledge of some
magic $1 value and allows only a single column; printing doesn't mean
much when there are multiple columns/rows.

I think the loading side of things should be redesigned into a more
general facility for providing query parameters. E.g. something like
\setparam $1 'whateva'
\setparamfromfile $2 'somefile'
\setparamfromprogram $3 cat /frakbar

which then would get used in the next query sent to the server. That'd
allow importing multiple columns, and it'd be useful for other purposes
than just loading binary data.
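Spelled out, the flow Andres sketches might look like this (hypothetical syntax from this thread; none of it is implemented, and the table and file names are invented for illustration):

```
\setparamfromfile $1 '~/avatar.jpg'
\setparam $2 'Pavel'
INSERT INTO people(avatar, name) VALUES ($1, $2);  -- consumes the queued parameters
```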

I don't yet have a good idea how to deal with moving individual cells
into files, so they can be loaded. One approach would be to have
something like

\storequeryresult filename_template.%row.%column

which'd then print the current query buffer into the relevant file after
doing replacement on %row and %column.
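Under that sketch (again entirely hypothetical, nothing here exists), the template would expand per cell, e.g.:

```
SELECT avatar FROM people
\storequeryresult avatar.%row.%column.jpg
-- a 3-row, 1-column result would produce
-- avatar.1.1.jpg, avatar.2.1.jpg, avatar.3.1.jpg
```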

I don't think we can find an API we agree upon in the next 48h...

- Andres


#12 Stephen Frost
sfrost@snowman.net
In reply to: Andres Freund (#11)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

Andres,

* Andres Freund (andres@anarazel.de) wrote:

I don't like the API here much. Loading requires knowledge of some
magic $1 value and allows only a single column; printing doesn't mean
much when there are multiple columns/rows.

I think the loading side of things should be redesigned into a more
general facility for providing query parameters. E.g. something like
\setparam $1 'whateva'
\setparamfromfile $2 'somefile'
\setparamfromprogram $3 cat /frakbar

which then would get used in the next query sent to the server. That'd
allow importing multiple columns, and it'd be useful for other purposes
than just loading binary data.

I tend to agree that the loading side should probably be thought through
more.

I don't yet have a good idea how to deal with moving individual cells
into files, so they can be loaded. One approach would be to have
something like

\storequeryresult filename_template.%row.%column

which'd then print the current query buffer into the relevant file after
doing replacement on %row and %column.

I don't actually agree that there's a problem having the output from a
query stored directly in binary form into a single file. The above
approach seems to imply that every binary result must go into an
independent file, and perhaps that would be useful in some cases, but I
don't see it as required.

I don't think we can find an API we agree upon in the next 48h...

Now that there's more than one opinion being voiced on the API, I tend
to agree with this. Hopefully we can keep the discussion moving
forward, however, as I do see value in this capability in general.

Thanks!

Stephen

#13 Andres Freund
andres@anarazel.de
In reply to: Stephen Frost (#12)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

Hi,

On 2017-04-05 21:07:59 -0400, Stephen Frost wrote:

* Andres Freund (andres@anarazel.de) wrote:

I don't like the API here much. Loading requires knowledge of some
magic $1 value and allows only a single column; printing doesn't mean
much when there are multiple columns/rows.

I think the loading side of things should be redesigned into a more
general facility for providing query parameters. E.g. something like
\setparam $1 'whateva'
\setparamfromfile $2 'somefile'
\setparamfromprogram $3 cat /frakbar

which then would get used in the next query sent to the server. That'd
allow importing multiple columns, and it'd be useful for other purposes
than just loading binary data.

I tend to agree that the loading side should probably be thought through
more.

I don't yet have a good idea how to deal with moving individual cells
into files, so they can be loaded. One approach would be to have
something like

\storequeryresult filename_template.%row.%column

which'd then print the current query buffer into the relevant file after
doing replacement on %row and %column.

I don't actually agree that there's a problem having the output from a
query stored directly in binary form into a single file. The above
approach seems to imply that every binary result must go into an
independent file, and perhaps that would be useful in some cases, but I
don't see it as required.

Well, it'd not be enforced - it'd depend on your template. But for a
lot of types of files, it'd not make sense to store multiple
columns/rows in one file. Particularly for ones where printing them out to
files is actually meaningful (i.e. binary ones).

- Andres


#14 Stephen Frost
sfrost@snowman.net
In reply to: Andres Freund (#13)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

Andres,

* Andres Freund (andres@anarazel.de) wrote:

On 2017-04-05 21:07:59 -0400, Stephen Frost wrote:

* Andres Freund (andres@anarazel.de) wrote:

I don't yet have a good idea how to deal with moving individual cells
into files, so they can be loaded. One approach would be to have
something like

\storequeryresult filename_template.%row.%column

which'd then print the current query buffer into the relevant file after
doing replacement on %row and %column.

I don't actually agree that there's a problem having the output from a
query stored directly in binary form into a single file. The above
approach seems to imply that every binary result must go into an
independent file, and perhaps that would be useful in some cases, but I
don't see it as required.

Well, it'd not be enforced - it'd depend on your template. But for a
lot of types of files, it'd not make sense to store multiple
columns/rows in one file. Particularly for ones where printing them out to
files is actually meaningful (i.e. binary ones).

Having the template not require the row/column place-holders included
strikes me as more likely to be confusing. My initial thinking around
this was that users who actually want independent files would simply
issue independent queries, while users who want to take a bunch of int4
columns and dump them into a single binary file would be able to do so
easily.

I'm not against adding the ability for a single query result to be saved
into independent files, but it strikes me as feature creep on this basic
capability. Further, I don't see any particular reason why splitting up
the output from a query into multiple files is only relevant for binary
data.

Thanks!

Stephen

#15 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Stephen Frost (#14)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-06 3:34 GMT+02:00 Stephen Frost <sfrost@snowman.net>:

Andres,

* Andres Freund (andres@anarazel.de) wrote:

On 2017-04-05 21:07:59 -0400, Stephen Frost wrote:

* Andres Freund (andres@anarazel.de) wrote:

I don't yet have a good idea how to deal with moving individual cells
into files, so they can be loaded. One approach would be to have
something like

\storequeryresult filename_template.%row.%column

which'd then print the current query buffer into the relevant file after
doing replacement on %row and %column.

I don't actually agree that there's a problem having the output from a
query stored directly in binary form into a single file. The above
approach seems to imply that every binary result must go into an
independent file, and perhaps that would be useful in some cases, but I
don't see it as required.

Well, it'd not be enforced - it'd depend on your template. But for a
lot of types of files, it'd not make sense to store multiple
columns/rows in one file. Particularly for ones where printing them out to
files is actually meaningful (i.e. binary ones).

Having the template not require the row/column place-holders included
strikes me as more likely to be confusing. My initial thinking around
this was that users who actually want independent files would simply
issue independent queries, while users who want to take a bunch of int4
columns and dump them into a single binary file would be able to do so
easily.

I'm not against adding the ability for a single query result to be saved
into independent files, but it strikes me as feature creep on this basic
capability. Further, I don't see any particular reason why splitting up
the output from a query into multiple files is only relevant for binary
data.

The files can be simply joined together outside psql

Personally I prefer the relation type: single field, single file, in a special \g
command - because I can simply turn off all formatting and the result should be
correct every time.

Stephen, have you some use case for your request?

Regards

Pavel


Thanks!

Stephen

#16 Stephen Frost
sfrost@snowman.net
In reply to: Pavel Stehule (#15)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

Greetings,

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-04-06 3:34 GMT+02:00 Stephen Frost <sfrost@snowman.net>:

Having the template not require the row/column place-holders included
strikes me as more likely to be confusing. My initial thinking around
this was that users who actually want independent files would simply
issue independent queries, while users who want to take a bunch of int4
columns and dump them into a single binary file would be able to do so
easily.

I'm not against adding the ability for a single query result to be saved
into independent files, but it strikes me as feature creep on this basic
capability. Further, I don't see any particular reason why splitting up
the output from a query into multiple files is only relevant for binary
data.

The files can be simply joined together outside psql

Just as multiple queries could be done to have the results put into
independent files.

Personally I prefer the relation type: single field, single file, in a special \g
command - because I can simply turn off all formatting and the result should be
correct every time.

Not sure why you think there would be a formatting issue or why the
result might not be 'correct'.

Stephen, have you some use case for your request?

The initial patch forced a single value result. Including such a
restriction doesn't seem necessary to me. As for use-case, I've
certainly written code to work with binary-result data from PG
previously and it seems entirely reasonable that someone might wish to
pull data into a file with psql and then process it. I've been
wondering if we should consider how binary-mode COPY works, but that
format ends up being pretty inefficient due to the repeated 32-bit
length value for every field.
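Stephen's efficiency point is visible in the documented COPY BINARY layout: every field on the wire carries a 32-bit length word. A minimal sketch in Python of that layout (int4 columns only; this illustrates the documented wire format, not any code from the patches):

```python
import struct

SIGNATURE = b'PGCOPY\n\xff\r\n\x00'  # 11-byte COPY BINARY signature

def copy_binary_int4(rows):
    """Serialize rows of int4 values in PostgreSQL's COPY BINARY layout."""
    out = bytearray(SIGNATURE)
    out += struct.pack('!ii', 0, 0)          # flags field, header-extension length
    for row in rows:
        out += struct.pack('!h', len(row))   # 16-bit field count per tuple
        for v in row:
            out += struct.pack('!ii', 4, v)  # 32-bit length word + 4-byte int4
    out += struct.pack('!h', -1)             # file trailer
    return bytes(out)

stream = copy_binary_int4([(1, 2, 3), (4, 5, 6)])
# 24 bytes of actual int4 data cost 73 bytes on the wire:
# half of every field is its length word, which is the inefficiency noted above.
```

The server accepts exactly this layout via COPY ... FROM ... (FORMAT binary).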

My initial reaction was primarily that I didn't see value in the
somewhat arbitrary restriction being imposed on usage of this.

Thanks!

Stephen

#17 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Stephen Frost (#16)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-06 14:47 GMT+02:00 Stephen Frost <sfrost@snowman.net>:

Greetings,

* Pavel Stehule (pavel.stehule@gmail.com) wrote:

2017-04-06 3:34 GMT+02:00 Stephen Frost <sfrost@snowman.net>:

Having the template not require the row/column place-holders included
strikes me as more likely to be confusing. My initial thinking around
this was that users who actually want independent files would simply
issue independent queries, while users who want to take a bunch of int4
columns and dump them into a single binary file would be able to do so
easily.

I'm not against adding the ability for a single query result to be saved
into independent files, but it strikes me as feature creep on this basic
capability. Further, I don't see any particular reason why splitting up
the output from a query into multiple files is only relevant for binary
data.

The files can be simply joined together outside psql

Just as multiple queries could be done to have the results put into
independent files.

Personally I prefer the relation type: single field, single file, in a special \g
command - because I can simply turn off all formatting and the result should be
correct every time.

Not sure why you think there would be a formatting issue or why the
result might not be 'correct'.

Stephen, have you some use case for your request?

The initial patch forced a single value result. Including such a
restriction doesn't seem necessary to me. As for use-case, I've
certainly written code to work with binary-result data from PG
previously and it seems entirely reasonable that someone might wish to
pull data into a file with psql and then process it. I've been
wondering if we should consider how binary-mode COPY works, but that
format ends up being pretty inefficient due to the repeated 32-bit
length value for every field.

My initial reaction was primarily that I didn't see value in the
somewhat arbitrary restriction being imposed on usage of this.

ok.

It is hard to design any solution - because there is not any intersection
on these basic, simple things.

Regards

Pavel


Thanks!

Stephen

#18 Peter Eisentraut
peter_e@gmx.net
In reply to: Pavel Stehule (#9)
Re: Re: new set of psql patches for loading (saving) data from (to) text, binary files

On 3/18/17 12:51, Pavel Stehule wrote:

I rewrote these patches - they allow binary export/import from psql, and
the code is very simple. The size of the patch is bigger due to including
a 4KB binary file (8KB in hex format).

This patch needs (at least) a rebase for the upcoming commit fest.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
