Postgresql-fdw
I have data stored in flat files (structured and unstructured). I want to
run SQL queries on that data, so I am looking into how to use a PostgreSQL
FDW (foreign data wrapper) on it. I want to build an API for running SQL
queries against the data in the files, so I am trying to run a separate
PostgreSQL engine and write a wrapper that connects to the data in the
files.
Please help me out with how I can proceed.
Thanks
aluka
You can build a Multicorn FDW: http://multicorn.org/
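As a rough sketch of what a Multicorn wrapper looks like (class, file, and option names here are made up; in a real deployment the class must subclass multicorn.ForeignDataWrapper, which is stubbed below so the row-producing logic can be read and run outside PostgreSQL):

```python
import csv

# In a real deployment this class subclasses multicorn.ForeignDataWrapper;
# the import is stubbed here so the logic runs standalone for illustration.
try:
    from multicorn import ForeignDataWrapper
except ImportError:
    ForeignDataWrapper = object

class FlatFileFdw(ForeignDataWrapper):
    """Hypothetical wrapper exposing one CSV file as a foreign table."""

    def __init__(self, options, columns):
        # 'options' come from the OPTIONS clause of CREATE FOREIGN TABLE.
        self.path = options["path"]
        self.columns = columns

    def execute(self, quals, columns):
        # Multicorn calls execute() on every scan of the foreign table;
        # it must yield one dict per row, keyed by column name.
        with open(self.path, newline="") as f:
            for row in csv.DictReader(f):
                yield row
```

Note that this still reads the whole file on every scan; Multicorn only gives you a place to hook your own file-reading code into the planner.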
Regards
On May 23, 2016 7:54 AM, "aluka raju" <alukaraju2894@gmail.com> wrote:
On 5/22/2016 10:52 PM, aluka raju wrote:
I'm not sure what you're expecting postgres to do for you here... Flat
unstructured files have to be read sequentially to do /anything/. Even for
a simple 'select * from someflattable where id=115', it will have to read
the whole file to find any and all records with id=115.
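To make that cost concrete, here is a minimal sketch (file layout and column names are made up) of what any engine must do to answer that query against a flat file with no index — touch every line:

```python
import csv

def select_where_id(path, wanted_id):
    """Full sequential scan: every row is read even if only one matches."""
    matches = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):   # no index: cost is O(file size) per query
            if row["id"] == wanted_id:
                matches.append(row)
    return matches
```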
If you want to use postgres to query this data efficiently, you really
should import this data into postgres tables, properly indexed for the
sorts of queries you wish to do.
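For the structured files, that import-and-index route might look like this (table, column, and index names are illustrative):

```sql
-- Illustrative names; adjust the columns to match your file.
CREATE TABLE someflattable (
    id      integer PRIMARY KEY,   -- the primary key is indexed automatically
    payload text
);

-- Index any other columns your queries filter on.
CREATE INDEX someflattable_payload_idx ON someflattable (payload);
```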
--
john r pierce, recycling bits in santa cruz
On Sun, May 22, 2016 at 23:38:43 -0700,
John R Pierce <pierce@hogranch.com> wrote:
If you want to use postgres to query this data efficiently, you really
should import this data into postgres tables, properly indexed for the
sorts of queries you wish to do.
And it isn't that hard to script this kind of thing. Postgres' COPY command
makes it easy to read CSV files. You could trigger the scripts by hand (or
as part of the script that runs the queries) just before running queries,
run them on a schedule at times that are normally good for picking up
updates, or trigger them off file changes.
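A minimal sketch of that load step (path and table name are hypothetical):

```sql
-- Load a CSV file into an existing table. COPY runs server-side, so the
-- path must be readable by the postgres server process.
COPY someflattable FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true);

-- From psql, \copy does the same but reads the file client-side:
--   \copy someflattable FROM 'data.csv' WITH (FORMAT csv, HEADER true)
```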
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On Mon, May 23, 2016 at 1:52 AM, aluka raju <alukaraju2894@gmail.com> wrote:
I have data storage in flat files (structured and unstructured) . I want to
run sql queries on that , so i am looking in to postgresql how to use fdw
on the data that i have.
You could use file_fdw to "attach" the files as foreign tables, at least
for the structured ones (CSV, delimited). As noted in other replies, this
won't be efficient, but if you are doing analytics against all the data in
the file, or just scanning the data one time, it works well:
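For example (the file path and column definitions are placeholders), the file_fdw setup looks roughly like:

```sql
CREATE EXTENSION file_fdw;

CREATE SERVER flat_files FOREIGN DATA WRAPPER file_fdw;

-- Each query against this table re-reads the file from disk.
CREATE FOREIGN TABLE someflattable (
    id   integer,
    name text
) SERVER flat_files
  OPTIONS (filename '/path/to/data.csv', format 'csv', header 'true');

SELECT * FROM someflattable WHERE id = 115;
```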