question

Started by anj patnaik over 10 years ago · 20 messages · general
#1 anj patnaik
patna73@gmail.com

Hello,

I recently downloaded postgres 9.4 and I have a client application that
runs in Tcl that inserts to the db and fetches records.

For the majority of the time, the app will connect to the server to do
insert/fetch.

For occasional use, we want to remove the requirement to have a server db
and just have the application retrieve data from a local file.

I know I can use pg_dump to export the tables. The questions are:

1) is there an in-memory or file-based db instance I can create that is
loaded from the dump file? This way the app code doesn't have to change.

2) does pg support an embedded db?
3) Or is my best option to convert the dump to sqlite, import it, and have
the app read that embedded db?

Finally, I am noticing pg_dump takes a lot of time to create a dump of my
table. Right now, the table has 77K rows. Are there any ways to create
automated batch files to create dumps overnight and do so quickly?

Thanks for your inputs!

#2 Adrian Klaver
adrian.klaver@aklaver.com
In reply to: anj patnaik (#1)
Re: question

On 10/14/2015 06:39 PM, anj patnaik wrote:

Hello,

I recently downloaded postgres 9.4 and I have a client application that
runs in Tcl that inserts to the db and fetches records.

For the majority of the time, the app will connect to the server to do
insert/fetch.

For occasional use, we want to remove the requirement to have a server
db and just have the application retrieve data from a local file.

I know I can use pg_dump to export the tables. The questions are:

1) is there an in-memory db instance or file based I can create that is
loaded with the dump file? This way the app code doesn't have to change.

No.

2) does pg support embedded db?

No.

3) Or is my best option to convert the dump to sqlite, import it, and have
the app read that embedded db?

Sqlite tends to follow Postgres conventions, so you might be able to use
the pg_dump output directly if you use --inserts or --column-inserts:

http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
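For example, something along these lines might work as a starting point
(untested sketch; file and table names are placeholders, and the dump will
still contain Postgres-specific statements such as SET commands and
ALTER ... OWNER that sqlite3 will complain about or skip):

pg_dump --column-inserts -t recorder postgres > recorder.sql
sqlite3 /path/to/local.db < recorder.sql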

Finally, I am noticing pg_dump takes a lot of time to create a dump of
my table. right now, the table has 77K rows. Are there any ways to
create automated batch files to create dumps overnight and do so quickly?

Define "long time".

What is the pg_dump command you are using?

Sure, use a cron job.
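For example, a nightly crontab entry could look something like this (the
binary path, output path and table name are placeholders; you will also
need non-interactive authentication, e.g. a ~/.pgpass file, so the job
does not stop at a password prompt):

# dump the table every night at 01:30, one file per day
30 1 * * * /usr/pgsql-9.4/bin/pg_dump -Fc -t recorder postgres > /backups/recorder-$(date +\%Y\%m\%d).dump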

Thanks for your inputs!

--
Adrian Klaver
adrian.klaver@aklaver.com


#3 Melvin Davidson
melvin6925@gmail.com
In reply to: Adrian Klaver (#2)
Re: question

In addition to saying exactly what you mean by "a long time" to pg_dump the
77K rows of your table:

What is your O/S and how much memory is on your system?
How many CPU's are in your system?
Also, what is your hard disk configuration?
What other applications are running simultaneously with pg_dump?
What is the value of shared_memory & maintenance_work_mem in
postgresql.conf?
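If you are not sure what those are set to, you can read them from psql
(the shared memory setting is the shared_buffers parameter):

psql -d postgres -c "SHOW shared_buffers;"
psql -d postgres -c "SHOW maintenance_work_mem;"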


--
*Melvin Davidson*
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.

#4 anj patnaik
patna73@gmail.com
In reply to: Melvin Davidson (#3)
Re: question

It's a Linux machine with 8 CPUs. I don't have the other details.

I get "archive member too large for tar format".

Is there a recommended command/options when dealing with very large tables,
i.e. 150K rows where about half the rows have roughly 22MB of data being
inserted?

-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w > /tmp/dump
pg_dump: [archiver (db)] connection to database "postgres" failed:
fe_sendauth: no password supplied
-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t > /tmp/dump
Password:
pg_dump: [tar archiver] archive member too large for tar format
-bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
-bash: pg_dumpall: command not found
-bash: tmpdb.out-2015101510.gz: Permission denied
-bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz

Thank you so much for replying and accepting my post to this NG.


#5 Guillaume Lelarge
guillaume@lelarge.info
In reply to: anj patnaik (#4)
Re: question

2015-10-15 20:40 GMT+02:00 anj patnaik <patna73@gmail.com>:

It's a Linux machine with 8 CPUs. I don't have the other details.

I get archive member too large for tar format.

Is there a recommended command/options when dealing with very large
tables, aka 150K rows and half of the rows have data being inserted with
22MB?

Don't use the tar format? I never understood the point of that one. You'd
be better off using the custom format.
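For example, a minimal sketch with the same table and database as in your
command (the output file name is just an example):

./pg_dump -t RECORDER -Fc postgres > /tmp/recorder.dump
# custom-format archives are restored with pg_restore, not psql:
./pg_restore -d postgres /tmp/recorder.dump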


--
Guillaume.
http://blog.guillaume.lelarge.info
http://www.dalibo.com

#6 Melvin Davidson
melvin6925@gmail.com
In reply to: anj patnaik (#4)
Re: question

You stated you wanted to dump just one table, but your command is dumping
the whole database!

So if you truly want to dump just a single table, then change your command
to:

pg_dump -t RECORDER postgres --format=t -t your_table_name -w > /tmp/dump

Also, please explain why you cannot provide the other required information.
Are you not the DBA? If that is the case, then I can only encourage you to
consult with him/her.


--
*Melvin Davidson*
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.

#7 Scott Mead
scottm@openscg.com
In reply to: Guillaume Lelarge (#5)
Re: question

On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <guillaume@lelarge.info>
wrote:

2015-10-15 20:40 GMT+02:00 anj patnaik <patna73@gmail.com>:

It's a Linux machine with 8 CPUs. I don't have the other details.

I get archive member too large for tar format.

Is there a recommended command/options when dealing with very large
tables, aka 150K rows and half of the rows have data being inserted with
22MB?

Don't use the tar format? I never understood the point of that one. You'd
be better off using the custom format.

+ 1

Use -F c

--
Scott Mead
Sr. Architect
*OpenSCG*
PostgreSQL, Java & Linux Experts

http://openscg.com


#8 anj patnaik
patna73@gmail.com
In reply to: Scott Mead (#7)
Re: question

Hello all,
I will experiment with -Fc (custom). The file is already growing very large.

I am running this:
./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump

Are there any other options for large tables to run faster and occupy less
disk space?

Below is memory info:

[root@onxl5179 tmp]# cat /proc/meminfo
MemTotal: 16333720 kB
MemFree: 187736 kB
Buffers: 79696 kB
Cached: 11176616 kB
SwapCached: 2024 kB
Active: 11028784 kB
Inactive: 4561616 kB
Active(anon): 3839656 kB
Inactive(anon): 642416 kB
Active(file): 7189128 kB
Inactive(file): 3919200 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 33456120 kB
SwapFree: 33428960 kB
Dirty: 33892 kB
Writeback: 0 kB
AnonPages: 4332408 kB
Mapped: 201388 kB
Shmem: 147980 kB
Slab: 365380 kB
SReclaimable: 296732 kB
SUnreclaim: 68648 kB
KernelStack: 5888 kB
PageTables: 37720 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 41622980 kB
Committed_AS: 7148392 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 179848 kB
VmallocChunk: 34359548476 kB
HardwareCorrupted: 0 kB
AnonHugePages: 3950592 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10240 kB
DirectMap2M: 16766976 kB

# CPUs=8
RHEL 6.5

The PG shared memory settings are the defaults, as I've not touched the
.conf file. I am not a DBA, just a test tools developer who needs to back
up the table efficiently. I am fairly new to PG and not an expert at Linux.

Also, if there are backup scripts/cron jobs that you recommend, please
point me to them.

Thanks!!


#9 Melvin Davidson
melvin6925@gmail.com
In reply to: anj patnaik (#8)
Re: question

The PostgreSQL default configuration is very conservative so as to ensure
it will work on almost any system.
However, based on your latest information, you should definitely adjust
shared_buffers = 4GB
maintenance_work_mem = 512MB

Note that you will need to restart PostgreSQL for this to take effect.
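That is, something like the following in postgresql.conf (the restart
command depends on how PostgreSQL was installed, so treat it as an
example):

shared_buffers = 4GB
maintenance_work_mem = 512MB

# then restart, e.g. on RHEL 6:
service postgresql-9.4 restart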


--
*Melvin Davidson*
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.

#10 Adrian Klaver
adrian.klaver@aklaver.com
In reply to: anj patnaik (#8)
Re: question

On 10/15/2015 01:35 PM, anj patnaik wrote:

Hello all,
I will experiment with -Fc (custom). The file is already growing very large.

I am running this:
./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump

Are there any other options for large tables to run faster and occupy
less disk space?

Yes, do not double compress. -Fc already compresses the file.

This information and a lot more can be found here:

http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
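In other words, something like this on its own should be enough, since the
archive is already compressed:

./pg_dump -t RECORDER -Fc postgres > /tmp/recorder.dump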


--
Adrian Klaver
adrian.klaver@aklaver.com


#11 Guillaume Lelarge
guillaume@lelarge.info
In reply to: Adrian Klaver (#10)
Re: question

2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian.klaver@aklaver.com>:

On 10/15/2015 01:35 PM, anj patnaik wrote:

Hello all,
I will experiment with -Fc (custom). The file is already growing very
large.

I am running this:
./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump

Are there any other options for large tables to run faster and occupy
less disk space?

Yes, do not double compress. -Fc already compresses the file.

Right. But I'd say "use custom format but do not compress with pg_dump".
Use the -Z0 option to disable compression, and use an external
multi-threaded tool such as pigz or pbzip2 to get faster and better
compression.
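For example (assuming pigz is installed; file names are just examples):

./pg_dump -t RECORDER -Fc -Z0 postgres | pigz > /tmp/recorder.dump.gz
# to restore, decompress first, then hand the archive to pg_restore:
unpigz /tmp/recorder.dump.gz
./pg_restore -d postgres /tmp/recorder.dump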

--
Guillaume.
http://blog.guillaume.lelarge.info
http://www.dalibo.com

#12 Francisco Olarte
folarte@peoplecall.com
In reply to: Guillaume Lelarge (#11)
Re: question

On Fri, Oct 16, 2015 at 8:27 AM, Guillaume Lelarge
<guillaume@lelarge.info> wrote:

2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian.klaver@aklaver.com>:

On 10/15/2015 01:35 PM, anj patnaik wrote:

...

./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump
Are there any other options for large tables to run faster and occupy
less disk space?

Yes, do not double compress. -Fc already compresses the file.

Right. But I'd say "use custom format but do not compress with pg_dump". Use
the -Z0 option to disable compression, and use an external multi-threaded
tool such as pigz or pbzip2 to get faster and better compression.

Actually I would not recommend that, unless you are making a long-term
or offsite copy. Doing it means you need to decompress the dump before
restoring or even testing it (e.g., via pg_restore > /dev/null).

And if you are pressed for disk space you may corner yourself into a
situation where you do NOT have enough disk space for an uncompressed
dump. Given that you are normally nervous enough when restoring, for
normal operations I think the built-in compression is better.

Also, I'm not current on the compressor -Fc uses; I think it is still
gzip, which is not that bad and is normally quite fast. (In fact I do not
use pbzip2, but I did some tests about a year ago and found bzip2 was
beaten by xz quite easily: for every level of bzip2, one of the xz levels
beat it in BOTH size and time. That was for my data, YMMV.)
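For what it's worth, a quick way to sanity-check a custom-format dump
without touching any database is something like (the file name is an
example):

# list the archive's table of contents
pg_restore -l /tmp/recorder.dump
# or write the whole generated script to nowhere
pg_restore -f /dev/null /tmp/recorder.dump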

Francisco Olarte.


#13 anj patnaik
patna73@gmail.com
In reply to: Francisco Olarte (#12)
Re: question

Thanks. What are the recommended command/options for backup, and how do I
restore?

I found the below online. Let me know if this is better and how to restore.
Thank you

pg_dump -Fc '<Db-Name>' | xz -3 > dump.xz


#14 Adrian Klaver
adrian.klaver@aklaver.com
In reply to: anj patnaik (#13)
Re: question

On 10/16/2015 12:10 PM, anj patnaik wrote:

Thanks. what is the recommended command/options for backup and how to
restore?

I found the below online. let me know if this is better and how to
restore. Thank you

pg_dump -Fc '<Db-Name>' | xz -3 > dump.xz

Again, why would you compress already-compressed output?

Also online:

http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html

http://www.postgresql.org/docs/9.4/interactive/app-pgrestore.html

They step you through the backup and restore process.


--
Adrian Klaver
adrian.klaver@aklaver.com


#15 anj patnaik
patna73@gmail.com
In reply to: Adrian Klaver (#14)
Re: question

My question is for Francisco who replied regarding xz. I was curious what
options he used. Thanks.


#16 Francisco Olarte
folarte@peoplecall.com
In reply to: anj patnaik (#8)
Re: question

Hi Anj:

On Thu, Oct 15, 2015 at 10:35 PM, anj patnaik <patna73@gmail.com> wrote:

I will experiment with -Fc (custom). The file is already growing very
large.

I do not recall if you've already provided them, but how large? I mean, if
you have a large database, a backup will take time and occupy space, and
you may simply be approaching that point.

As a benchmark, for intellectual satisfaction, the smallest backup you can
get is probably the text format compressed with the most aggressive option
of your favorite compressor, but this is normally useless except for very
special cases.

My recommendation would be to use plain -Fc for a backup; this is what I
do, sometimes tweaking -Z after tests, but in my experience the default
level is normally right. Bear in mind DB disks tend to be expensive, while
backup disks can be much cheaper and, unless you are keeping a lot of them,
backups are smaller. As an example, we have a server pair (replicated),
each with a couple of short-stroked fast disks for the database and a
couple of 'normal' disks for first-line backups. The normal disks are about
ten times the size of the database disks and easily fit 30 backups, so we
can back up to one of them, copy to the second, and replicate to the other
machine in the pair, just using -Fc. This works because the backup
compresses indexes quite well, by reducing them to a 'CREATE INDEX', and
the copy format used inside is generally more compact than the layout used
on disk (which needs free space, is framed, and lots of other things) and
compresses quite well too. If you are pressed for backup size, you normally
have very special needs or do not have a properly dimensioned system. But
to say anything more you will need to provide some numbers (how big are
your database and backups, how fast are your disks, and things like this);
in that case maybe hints can be provided.

I am running this:
./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump

In this case gzip is useless; -Fc already uses gzip compression at the
member level. Doing it with -Z0 and then gzipping will gain you a bit,
obviously, as it will compress everything as a single chunk (except if you
manage to hit a pathological case), but I doubt it will be significant.

As pointed out elsewhere, if you use -Fc with -Z0 and then compress with a
'better' compressor you may get a smaller file, or get it faster, but
remember you'll need to decompress it before restoring (this does not
happen with the text format, where you can do a streaming restore, but the
restore options for the text format are limited; it's an all-or-nothing
approach unless you are really fluent with stream editors).
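For example, with the plain text format the whole round trip can be
streamed (names are examples):

# plain-text dump, compressed on the fly
pg_dump -t recorder postgres | xz -3 > recorder.sql.xz
# streaming restore, no intermediate uncompressed file needed
xzcat recorder.sql.xz | psql -d postgres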

Francisco Olarte.

#17 Francisco Olarte
folarte@peoplecall.com
In reply to: anj patnaik (#15)
Re: question

Hi Anj:

On Sat, Oct 17, 2015 at 3:11 AM, anj patnaik <patna73@gmail.com> wrote:

My question is for Francisco who replied regarding xz. I was curious what
options he used. Thanks.

First, we do not normally top-post on this list.

Second, I do not remember the exact options I use. I can look them up,
but they are surely going to be useless (they are for a custom process
which does several things with files; it uses gzip (in -Fc backups), plus
xz (for some files which need to be kept for a long time and are nearly
never needed), plus lzo (as I found lzo-compressed temporary files were
faster than uncompressed ones), and a lot of code). But in the development
process we did a full comparison of several compressors, and found what I
stated about bzip2: it was surpassed in every combination of options by xz
(plain bzip2, plain xz).

Francisco Olarte.


#18 anj patnaik
patna73@gmail.com
In reply to: Francisco Olarte (#17)
Re: question

Thanks Adrian. I've got postgres 9.4 running on a second RHEL 6.5 machine.
This is the machine I want to restore the pg_dump to.

Last week, I ran this command on the main PG 9.4 db server:

./pg_dump -t RECORDER -Fc postgres -Z0 | xz -9 > /tmp/dump_xz

The -Z0 turns off compression. So I used xz and noticed the file was not
that big.

Now, I got PG 9.4 running today on a 2nd machine. I just did the initdb,
started the service, updated pg_hba.conf and postgresql.conf to accept
connections from the outside, and then restarted the service.

1) When I attempt to restore the archive, I get prompted for a password
and I am confused what to do here.

2) Is it correct to do xzcat and then pipe that to psql? I want to restore
on mymachine.com

3) On the target machine, I ran initdb, which by default creates the
postgres database. Will that be a problem when restoring?

On this machine I am connected as root.

xzcat dump_xz |/opt/PostgreSQL/9.4/bin/psql -h mymachine.com postgres
Password:

Thank you!


#19 anj patnaik
patna73@gmail.com
In reply to: anj patnaik (#18)
Re: question

Quick update on the above:

1) I got past the password prompt by changing pg_hba.conf on the target
machine to trust.

2) I am still running this command to restore on the remote machine and
want feedback, as it appears to NOT create the desired table:

xzcat dump_xz |/opt/PostgreSQL/9.4/bin/psql -h mycomp.com postgres -U
postgres

Thanks.


#20 Adrian Klaver
adrian.klaver@aklaver.com
In reply to: anj patnaik (#19)
Re: question

On 10/23/2015 03:22 PM, anj patnaik wrote:

Quick update on the above:

1) I got past the password prompt by changing the pg_hba.conf on target
machine to trust.

2) I am still running this command to restore on remote machine and want
feedback as it appears to NOT create the desired table:

xzcat dump_xz |/opt/PostgreSQL/9.4/bin/psql -h mycomp.com postgres -U
postgres

I do not see that working for the reason below:

./pg_dump -t RECORDER -Fc postgres -Z0 | xz -9 > /tmp/dump_xz

http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
"
-F format
--format=format

Selects the format of the output. format can be one of the following:

p
plain

Output a plain-text SQL script file (the default).

c
custom

Output a custom-format archive suitable for input into pg_restore.
Together with the directory output format, this is the most flexible
output format in that it allows manual selection and reordering of
archived items during restore. This format is also compressed by default.
"

In particular: 'Output a custom-format archive suitable for input into
pg_restore.'

psql will not know what to do with this.
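Since the archive was made with -Fc, something along these lines should
work instead of piping into psql (the intermediate file name is just an
example, and pg_restore is assumed to live next to the psql you have been
using):

xzcat dump_xz > /tmp/recorder.dump
/opt/PostgreSQL/9.4/bin/pg_restore -h mycomp.com -U postgres -d postgres /tmp/recorder.dump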


--
Adrian Klaver
adrian.klaver@aklaver.com
