Does pg_dump get delayed if I have a loop that does continuous insertions?
I am running pg_dump on a database while on another machine running a loop
to do insertions.
Does pg_dump wait for activity to go idle before it completes, or how does
it determine when it has all the records needed for archiving?
I am using the compressed mode and also using xz.
Thanks,
ap
On 10/27/2015 04:10 PM, anj patnaik wrote:
I am running pg_dump on a database while on another machine running a
loop to do insertions.

Does pg_dump wait for activity to go idle before it completes, or how does
it determine when it has all the records needed for archiving?
http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
"pg_dump is a utility for backing up a PostgreSQL database. It makes
consistent backups even if the database is being used concurrently.
pg_dump does not block other users accessing the database (readers or
writers)."
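To make the snapshot behavior concrete, here is a minimal sketch (the database name `mydb` and table `t` are assumptions, not from your setup). pg_dump takes its snapshot when it starts, so rows inserted after that point are neither included in the dump nor do they delay it:

```shell
# Start a dump in the background; its snapshot is fixed at this moment
pg_dump -Fc -d mydb -f mydb.dump &

# Concurrent inserts proceed unblocked, but will not appear in the dump
psql -d mydb -c "INSERT INTO t (id) VALUES (1);"

wait   # the dump finishes with a consistent snapshot regardless of the insert
```

In other words, pg_dump never waits for writers to go idle; it simply dumps what existed when it began.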
I am using the compressed mode and also using xz.
Again, why?
Thanks,
ap
--
Adrian Klaver
adrian.klaver@aklaver.com
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
I was trying to achieve the smallest file possible so tried xz. Right now
the db size returned from SELECT
pg_size_pretty(pg_database_size('postgres')) is 1.4 GB and the size of the
dump with xz is 2.2 GB.
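One possible explanation for the dump being larger than the database, sketched here with assumed flags: if the dump was taken with the custom format (-Fc), it is already zlib-compressed, and running xz over already-compressed data gains nothing and can even grow the file. Pick one compressor:

```shell
# Option 1: rely on the custom format's built-in compression
pg_dump -Fc -d postgres -f dump.custom

# Option 2: dump plain SQL and let xz do all the compression
pg_dump -Fp -d postgres | xz -6 > dump.sql.xz

# Avoid: xz-ing -Fc output compresses twice and mostly wastes CPU
```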
Is there a limit to the size of the database that pg_dump will run on? Will
it work when the db is in TBs?
Also, I want to know if anyone has found any handy cron scripts for
automated backups to run on a daily/weekly basis? I found some on Google,
but I am interested to know if there are better ones.
Thanks,
ap
On Wed, Oct 28, 2015 at 12:05 AM, Adrian Klaver <adrian.klaver@aklaver.com>
wrote:
On 10/28/2015 01:09 PM, anj patnaik wrote:
I was trying to achieve the smallest file possible so tried xz. Right
now the db size returned from SELECT
pg_size_pretty(pg_database_size('postgres')) is 1.4 GB and the size of
the dump with xz is 2.2 GB.
Probably not a good idea to put your data in the postgres database. This
is one of the system databases created by initdb and is meant to be used
as a default database to connect to, not really as one to store data in.
Just did a test with pg_dumpall and it is not dumped when you back up the
entire cluster.
Also hard to compare the numbers without seeing the command that
produced the database dump. An example from here on a small database:
test=> select pg_size_pretty(pg_database_size('production'));
pg_size_pretty
----------------
53 MB
aklaver@panda:~> pg_dump -Fc -d production -U postgres -f prod.out
aklaver@panda:~> l -h prod.out
-rw-r--r-- 1 aklaver users 5.4M Oct 28 16:44 prod.out
--
Adrian Klaver
adrian.klaver@aklaver.com
On 2015-10-28 14:09, anj patnaik wrote:
Also, I want to know if anyone has found any handy cron scripts for automated
backups to run on a daily/weekly basis? i found some on google, but interested
to know if there are better ones.
It does a lot more but:
https://github.com/wal-e/wal-e
--
http://yves.zioup.com
gpg: 4096R/32B0F416
Probably not a good idea to put your data in the postgres database. This
is one of the system databases created by initdb and is meant to be used
as a default database to connect to, not really as one to store data in.
Just did a test with pg_dumpall and it is not dumped when you back up the
entire cluster.
This statement is not entirely accurate. Although I agree it is a good idea
to have a separate named database for each "application", pg_dumpall does not
add a "CREATE DATABASE postgres;" to the dump file because pg_restore
expects that the postgres database already exists. However, all objects in
the postgres database are dumped and will be restored. This can be verified
because you can do a "grep 'connect postgres' <your_pg_dump_file" and it
will be found, as are any objects in the postgres database.
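A quick sketch of that check, run against a fabricated two-line stand-in for a real pg_dumpall file (the file name and its contents are assumptions made up for illustration):

```shell
# Fabricated stand-in for a pg_dumpall output file (real dumps are far larger)
printf '%s\n' 'CREATE DATABASE test;' \
              '\connect postgres' \
              'CREATE TABLE junk (id int);' > cluster.sql

# The \connect line shows objects in the postgres database are in the dump
grep 'connect postgres' cluster.sql
```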
--
*Melvin Davidson*
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.
On 10/29/2015 10:54 AM, Melvin Davidson wrote:
Probably not a good idea to put your data in the postgres database.
This is one of the system databases created by initdb and is meant to
be used as a default database to connect to, not really as one to store
data in. Just did a test with pg_dumpall and it is not dumped when you
back up the entire cluster.

This statement is not entirely accurate. Although I agree it is a good
idea to have a separate named database for each "application", pg_dumpall
does not add a "CREATE DATABASE postgres;" to the dump file because
pg_restore expects that the postgres database already exists. However,
all objects in the postgres database are dumped and will be restored.
This can be verified because you can do a "grep 'connect postgres'
<your_pg_dump_file" and it will be found, as are any objects in the
postgres database.
You are correct. I did not dig deep enough into the dump file.
--
Adrian Klaver
adrian.klaver@aklaver.com
2015-10-28 21:09 GMT+01:00 anj patnaik <patna73@gmail.com>:
I was trying to achieve the smallest file possible so tried xz. Right now
the db size returned from SELECT
pg_size_pretty(pg_database_size('postgres')) is 1.4 GB and the size of the
dump with xz is 2.2 GB.

Is there a limit to the size of the database that pg_dump will run on?
No limit.
Will it work when db is in TBs?
... though this would be a rather bad idea. It will work, but it will be
damn slow, and its restoration will take far too long. Much better to do
PITR backups with databases bigger than 100GB.
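A minimal sketch of the WAL-archiving side of PITR; the paths and values here are assumptions, and a base backup (e.g. via pg_basebackup) completes the picture:

```
# postgresql.conf (sketch): continuous WAL archiving for PITR
wal_level = archive            # "replica" on 9.6 and later
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
```

With archiving on, you take periodic base backups and recovery replays the archived WAL up to a chosen point in time, so restore cost no longer scales with re-running a giant SQL dump.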
Also, I want to know if anyone has found any handy cron scripts for
automated backups to run on a daily/weekly basis? i found some on google,
but interested to know if there are better ones.
I really like pg_back (though, as a disclaimer, I should say this is
written by one of my co-workers, Nicolas Thauvin). Anyway, it's on
https://github.com/orgrim/pg_back if you want to take a look.
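For the bare-bones daily case, a single cron entry is often enough; this sketch assumes a hypothetical database name, backup path, and schedule:

```
# /etc/cron.d/pg-backup (sketch): nightly 02:30 custom-format dump,
# one file per day; note % must be escaped as \% inside a crontab
30 2 * * * postgres pg_dump -Fc -d mydb -f /var/backups/mydb-$(date +\%F).dump
```

Pair it with something that prunes old files, or a tool like pg_back that handles retention for you.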
--
Guillaume.
http://blog.guillaume.lelarge.info
http://www.dalibo.com