ignore tablespace in schema definition queries
Hello all,
I have a big set of migration queries (that I do not control) that I must
run on my automatic test database in order to set it up and run tests.
These queries create all sorts of things like indexes, tables, and so on. But
they also include the specification of the tablespace they expect to use
(tablespace_000, tablespace_001, up to tablespace_999). This would require
me to set up hundreds of tablespaces before I can start the migration
process and run the tests.
Is there a way to tell the Postgres server to ignore that part and just use
some default tablespace? At present I cannot bring the test database to a
usable point, because many migration queries fail when the tablespace they
need has not been created. I would like to avoid creating them.
Thanks
João
On Sat, Apr 3, 2021 at 1:59 PM Joao Miguel Ferreira <
joao.miguel.c.ferreira@gmail.com> wrote:
Hi João,
Below are some thoughts, not a solution.
Since the migration queries are part of an established process, it is
probably advisable to run them as they are, to avoid complications later
on.
There may be a good reason the migration queries specify tablespaces: it
gives you the opportunity to choose the directories where the data will be
stored.
The 1000 tablespaces (tablespace_000 through tablespace_999) could be
created using a procedural language such as PL/pgSQL with generate_series,
or a general-purpose language such as Python via psycopg2. That way you
could create the tablespaces before you run the migration queries.
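One caveat with the PL/pgSQL route: CREATE TABLESPACE cannot run inside a
transaction block, so a DO block that issues it will fail. Generating the
statements with generate_series and executing them through psql's \gexec
avoids this, since under autocommit each generated statement runs in its own
transaction. A minimal sketch (the LOCATION path is an assumption; each
directory must already exist and be owned by the postgres OS user, and the
role needs superuser):

```sql
-- Run in psql. The query emits one CREATE TABLESPACE statement per row,
-- and \gexec executes each result as its own statement.
-- The /var/lib/postgresql/ts/ path is a placeholder.
SELECT format(
         'CREATE TABLESPACE tablespace_%1$s LOCATION ''/var/lib/postgresql/ts/tablespace_%1$s''',
         lpad(i::text, 3, '0')
       )
FROM generate_series(0, 999) AS i
\gexec
```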
Allan.
The solution depends on how you are creating the tables.
For example, pg_restore has the option --no-tablespaces. With this option,
all objects will be created in whichever tablespace is the default during
the restore. pg_dump has a similar option.
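As a sketch of that route (database and file names below are placeholders,
and this only applies if the migration arrives as a dump rather than as SQL
scripts):

```shell
# Dump without tablespace assignments, then restore; every object lands in
# the default tablespace of the target database.
pg_dump -Fc --no-tablespaces -f app.dump sourcedb
pg_restore --no-tablespaces -d testdb app.dump
```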
If you are running CREATE TABLE statements that have hard-coded
tablespaces, then maybe pass your scripts through the sed or awk utility to
replace the name with pg_default just prior to execution.
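A minimal sketch of the sed approach, assuming the scripts use hard-coded
TABLESPACE clauses with the tablespace_NNN naming shown above (file names
are placeholders):

```shell
# Rewrite every "TABLESPACE tablespace_NNN" clause to use pg_default,
# producing a patched script to feed to psql.
sed -E 's/TABLESPACE[[:space:]]+tablespace_[0-9]{3}/TABLESPACE pg_default/g' \
    migration.sql > migration_patched.sql
```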
Or, go ahead and create all possible tablespaces before running the CREATE
TABLE statements, since in PostgreSQL a tablespace is little more than a
pointer to a directory, not preallocated data files as in Oracle or SQL
Server.
On Sat, Apr 3, 2021 at 6:59 AM Joao Miguel Ferreira <
joao.miguel.c.ferreira@gmail.com> wrote:
--
- Mark
Hi Allan and Mark,
Thank you for your thoughts.
I ended up writing a 1,000-line SQL file with the commands to create the
1000 tablespaces. I keep the file with my bootstrap scripts and load it
with the psql client.
It works just fine and it is quite fast. I was a bit afraid it would be a
time-consuming thing, but it's not, really. Cool!
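For reference, a file like that can be generated rather than written by
hand; a sketch (the LOCATION directory is an assumption, and each directory
must exist and be owned by the postgres OS user before the file is loaded
with psql):

```shell
# Emit one CREATE TABLESPACE statement per line, tablespace_000..tablespace_999.
# seq -w zero-pads every number to the same width (three digits here).
for i in $(seq -w 0 999); do
  printf "CREATE TABLESPACE tablespace_%s LOCATION '/var/lib/postgresql/ts/tablespace_%s';\n" "$i" "$i"
done > create_tablespaces.sql
```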
The Pg backend is actually running in Docker, which is very handy for my
scenario.
Thank you
Cheers
Joao
On Sat, Apr 3, 2021 at 2:26 PM Mark Johnson <remi9898@gmail.com> wrote: