select count() out of memory

Started by Thomas Finneid · over 18 years ago · 51 messages · general
#1Thomas Finneid
tfinneid@student.matnat.uio.no

Hi

I am volume testing a db model that consists of partitioned tables. The
db has been running for a week and a half now and has built up to contain
approx 55000 partition tables of 18000 rows each. The root table therefore
contains about 1 billion rows. When I try to do a "select count(*)" on the
root table, it does some work for a while, perhaps 5-10 minutes, and then
aborts with

ERROR: out of memory
DETAIL: Failed on request of size 130.

Does anybody have any suggestion as to which parameter I should tune to
give it more memory to be able to perform queries on the root table?

regards

thomas

The last part of the db log is the following; I don't think anything other
than the last two lines is relevant.

pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
MdSmgr: 4186112 total in 9 blocks; 911096 free (4 chunks); 3275016 used
LockTable (locallock hash): 2088960 total in 8 blocks; 418784 free (25
chunks); 1670176 used
Timezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR: out of memory
DETAIL: Failed on request of size 130.
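For scale, the arithmetic behind "about 1 billion rows" checks out:

```python
# Figures as reported in the message above.
partitions = 55_000
rows_per_partition = 18_000

total_rows = partitions * rows_per_partition
print(total_rows)  # 990000000, i.e. ~1 billion
```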

#2Bruce Momjian
bruce@momjian.us
In reply to: Thomas Finneid (#1)
Re: select count() out of memory

<tfinneid@student.matnat.uio.no> writes:

ERROR: out of memory
DETAIL: Failed on request of size 130.

Does anybody have any suggestion as to which parameter I should tune to
give it more memory to be able to perform queries on the root table?

This indicates that malloc() failed which means the system couldn't provide
any more memory. Either you have a very low memory ulimit (look at ulimit -a
in the same session as Postgres) or your machine is really low on memory.
Perhaps you have shared_buffers set very high or some other program is using
all your available memory (and swap)?

The last part of the db log is the following; I don't think anything other
than the last two lines is relevant.

You're wrong. All the lines like:

pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

Also, what version of Postgres is this?

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com

#3Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Bruce Momjian (#2)
Re: select count() out of memory

Hi

I have tried to answer to the best of my knowledge, but it's running on
Solaris 10, and I am not that familiar with Solaris (Go Linux!!! :)

any more memory. Either you have a very low memory ulimit (look at ulimit -a
in the same session as Postgres) or your machine is really low on memory.
Perhaps you have shared_buffers set very high or some other program is using
all your available memory (and swap)?

The machine has 32GB RAM. I don't know how much swap it has, but I do know
the disk system is a disk cluster with 16x450GB disks; it probably has a
local disk as well, but I don't know how big it is.

-bash-3.00$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16357
virtual memory (kbytes, -v) unlimited
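One thing worth flagging in that output: `open files (-n) 256`. Each partition is at least one file on disk (plus one per index), and PostgreSQL caps its own descriptor use to what the OS allows, so a limit of 256 forces frequent close/reopen cycling with tens of thousands of relations. A quick check (the 8192 figure below is only an illustrative suggestion, not a recommendation from this thread):

```shell
# Show the current per-process open-file limit for this session.
ulimit -n

# A more comfortable setting for the postgres user might be, e.g.:
# ulimit -n 8192   # then restart the postmaster from the same session
```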

This is my config:

checkpoint_segments = 96
effective_cache_size = 128000
shared_buffers = 430000
max_fsm_pages = 208000
max_fsm_relations = 10000

max_connections = 1000

autovacuum = off # enable autovacuum subprocess?

fsync = on # turns forced synchronization on or off
#full_page_writes = on # recover from partial page writes
wal_sync_method = fdatasync
wal_buffers = 256

commit_delay = 5
#commit_siblings = 5 # range 1-1000
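As a sanity check, these settings are in 8 kB pages (the default build setting), so a sketch of what they translate to in bytes:

```python
PAGE = 8192  # default PostgreSQL block size in bytes

shared_buffers = 430_000 * PAGE        # shared memory for the buffer cache
effective_cache_size = 128_000 * PAGE  # what the planner assumes is cached
wal_buffers = 256 * PAGE               # WAL buffer space

print(round(shared_buffers / 2**30, 2))       # 3.28 (GiB)
print(round(effective_cache_size / 2**20, 1)) # 1000.0 (MiB)
print(wal_buffers // 2**20)                   # 2 (MiB)
```

On a 32 GB machine these are not extreme by themselves; note that the failing allocation reported above is in per-backend (malloc) memory, which is what the context dump records, not in shared memory.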

Also, what version of Postgres is this?

Apparently it's 8.1.8; I thought it was 8.2.

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

The file is 20M; these are the last lines (the first line continues
until ff_26000):

idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_4_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_4_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_4_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
pg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0
chunks); 768 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
MdSmgr: 4186112 total in 9 blocks; 911096 free (4 chunks); 3275016 used
LockTable (locallock hash): 2088960 total in 8 blocks; 418784 free (25
chunks); 1670176 used
Timezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR: out of memory
DETAIL: Failed on request of size 130.

#4Bruce Momjian
bruce@momjian.us
In reply to: Thomas Finneid (#3)
Re: select count() out of memory

<tfinneid@student.matnat.uio.no> writes:

max_connections = 1000

Do you actually have anywhere near this number of processes? What is your
setting for work_mem? Keep in mind every process could use as much as work_mem
and actually it's possible to use that much several times over.

Also, what is your maintenance_work_mem and do you have many vacuums or other
such commands running at the time?

1,000 processes is a large number of processes. You may be better off
re-architecting to run fewer processes simultaneously. But if that's not
possible you'll have to keep it in mind to tune other things properly.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com

#5Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Bruce Momjian (#4)
Re: select count() out of memory

<tfinneid@student.matnat.uio.no> writes:

max_connections = 1000

Do you actually have anywhere near this number of processes? What is your
setting for work_mem? Keep in mind every process could use as much as
work_mem, and actually it's possible to use that much several times over.

Also, what is your maintenance_work_mem, and do you have many vacuums or
other such commands running at the time?

1,000 processes is a large number of processes. You may be better off
re-architecting to run fewer processes simultaneously. But if that's not
possible you'll have to keep it in mind to tune other things properly.

The application only needs about 20 connections in normal situations,
but might need up to 100 in some situations, e.g. if there is much
latency and new connections arrive before others are finished.

I could certainly reduce the number to 100 or 50, but do you think that
would help with this problem?

regards

thomas

#6Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Thomas Finneid (#3)
Re: select count() out of memory

tfinneid@student.matnat.uio.no wrote:

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

The file is 20M; these are the last lines (the first line continues
until ff_26000):

idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used

You have 26000 partitions???

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#7Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Alvaro Herrera (#6)
Re: select count() out of memory

tfinneid@student.matnat.uio.no wrote:

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

The file is 20M; these are the last lines (the first line continues
until ff_26000):

idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used

You have 26000 partitions???

At the moment the db has 55000 partitions, and that's only a fifth of the
complete volume the system will have in production. The reason I chose
this solution is that a partition will be loaded with new data every 3-30
seconds, and all of it will be read by up to 15 readers every time new data
is available. The data will be approx 2-4TB in production in total, so it
would be too slow if I put it in a single table with permanent indexes.

I did a test previously where I created 1 million partitions (without
data) and checked the limits of pg, so I think it should be ok.

thomas

#8Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Thomas Finneid (#7)
Re: select count() out of memory

tfinneid@student.matnat.uio.no wrote:

tfinneid@student.matnat.uio.no wrote:

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

The file is 20M; these are the last lines (the first line continues
until ff_26000):

idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used

You have 26000 partitions???

At the moment the db has 55000 partitions, and that's only a fifth of the
complete volume the system will have in production. The reason I chose
this solution is that a partition will be loaded with new data every 3-30
seconds, and all of it will be read by up to 15 readers every time new data
is available. The data will be approx 2-4TB in production in total, so it
would be too slow if I put it in a single table with permanent indexes.

I did a test previously where I created 1 million partitions (without
data) and checked the limits of pg, so I think it should be ok.

Clearly it's not. The difference could be the memory usage and wastage
for all those relcache entries and other stuff. I would reduce the
number of partitions to a more reasonable value (within the tens, most
likely).

Maybe your particular problem can be solved by raising
max_locks_per_transaction (?) but I wouldn't count on it.
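A back-of-envelope sketch of why the shared lock table is a plausible suspect, assuming the 8.1 default of max_locks_per_transaction = 64 and inferring four indexes per partition from the index names in the log:

```python
# Shared lock table capacity: roughly max_locks_per_transaction slots
# per allowed connection, pooled across the whole cluster.
max_connections = 1000
max_locks_per_transaction = 64  # PostgreSQL default
lock_slots = max_locks_per_transaction * max_connections

# A count(*) on the root must lock every child and its indexes up front.
partitions = 55_000
indexes_per_partition = 4  # value1, value2, value7, trace_id (from the log)
locks_needed = partitions * (1 + indexes_per_partition)

print(lock_slots, locks_needed, locks_needed > lock_slots)
# 64000 275000 True -- one such query can exhaust the whole pool
```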

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

#9Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#8)
Re: select count() out of memory

Alvaro Herrera <alvherre@commandprompt.com> writes:

tfinneid@student.matnat.uio.no wrote:

I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.

Clearly it's not.

You couldn't have tested it too much --- even planning a query over so
many tables would take forever, and actually executing it would surely
have run the system out of locktable space before it even started
scanning.

The partitioning facility is designed for partition counts in the tens,
or maybe hundreds at the most.

regards, tom lane

#10Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Alvaro Herrera (#8)
Re: select count() out of memory

tfinneid@student.matnat.uio.no wrote:

tfinneid@student.matnat.uio.no wrote:

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

The file is 20M; these are the last lines (the first line continues
until ff_26000):

idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used

You have 26000 partitions???

At the moment the db has 55000 partitions, and that's only a fifth of the
complete volume the system will have in production. The reason I chose
this solution is that a partition will be loaded with new data every 3-30
seconds, and all of it will be read by up to 15 readers every time new data
is available. The data will be approx 2-4TB in production in total, so it
would be too slow if I put it in a single table with permanent indexes.

I did a test previously where I created 1 million partitions (without
data) and checked the limits of pg, so I think it should be ok.

Clearly it's not.

It does not mean my problem has anything to do with the number of
partitions. It might, or it might not, and that's the problem: the
cause has not been located yet.

According to the documented limits of pg,
The difference could be the memory usage and wastage

for all those relcache entries and other stuff. I would reduce the
number of partitions to a more reasonable value (within the tens, most
likely)

The db worked fine until it reached perhaps 30-40 thousand partitions.

#11Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Tom Lane (#9)
Re: select count() out of memory

Alvaro Herrera <alvherre@commandprompt.com> writes:

tfinneid@student.matnat.uio.no wrote:

I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.

Clearly it's not.

You couldn't have tested it too much --- even planning a query over so
many tables would take forever, and actually executing it would surely
have run the system out of locktable space before it even started
scanning.

And this is the testing, so you're right....

It's only the select on the root table that fails. Operations on a single
partition are no problem.

The partitioning facility is designed for partition counts in the tens,
or maybe hundreds at the most.

Maybe, but it works even on 55000 partitions as long as the operations are
done against a partition and not the root table.
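The distinction can be sketched in SQL (the child table name below is hypothetical, modeled on the index names in the log):

```sql
-- Fine: a single child table, one relation locked and scanned.
SELECT count(*) FROM attributes_g1_seq_1_ff_4;

-- Fails at this scale: the parent expands to all ~55,000 children,
-- each needing a lock and cache entries before counting even starts.
SELECT count(*) FROM attributes;

-- FROM ONLY scans just the parent relation itself, skipping children.
SELECT count(*) FROM ONLY attributes;
```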

#12Scott Marlowe
scott.marlowe@gmail.com
In reply to: Tom Lane (#9)
Re: select count() out of memory

On 10/25/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Alvaro Herrera <alvherre@commandprompt.com> writes:

tfinneid@student.matnat.uio.no wrote:

I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.

Clearly it's not.

You couldn't have tested it too much --- even planning a query over so
many tables would take forever, and actually executing it would surely
have run the system out of locktable space before it even started
scanning.

The partitioning facility is designed for partition counts in the tens,
or maybe hundreds at the most.

I've had good results well into the hundreds, but after about 400 or
so, things start to get a bit wonky.

#13Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Thomas Finneid (#11)
Re: select count() out of memory

tfinneid@student.matnat.uio.no wrote:

Alvaro Herrera <alvherre@commandprompt.com> writes:

The partitioning facility is designed for partition counts in the tens,
or maybe hundreds at the most.

Maybe, but it works even on 55000 partitions as long as the operations are
done against a partition and not the root table.

It will work on a million partitions and more, provided you do
operations on single partitions.

What you want to do is not possible, period. Maybe when we redesign
partitioning, but that's far into the future. Kindly do not waste our
time (nor yours).

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

#14Erik Jones
erik@myemma.com
In reply to: Thomas Finneid (#10)
Re: select count() out of memory

On Oct 25, 2007, at 9:36 AM, tfinneid@student.matnat.uio.no wrote:

tfinneid@student.matnat.uio.no wrote:

tfinneid@student.matnat.uio.no wrote:

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

The file is 20M; these are the last lines (the first line continues
until ff_26000):

idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used

You have 26000 partitions???

At the moment the db has 55000 partitions, and that's only a fifth of the
complete volume the system will have in production. The reason I chose
this solution is that a partition will be loaded with new data every 3-30
seconds, and all of it will be read by up to 15 readers every time new data
is available. The data will be approx 2-4TB in production in total, so it
would be too slow if I put it in a single table with permanent indexes.

I did a test previously where I created 1 million partitions (without
data) and checked the limits of pg, so I think it should be ok.

Clearly it's not.

It does not mean my problem has anything to do with the number of
partitions. It might, or it might not, and that's the problem: the
cause has not been located yet.

According to the documented limits of pg,
The difference could be the memory usage and wastage
for all those relcache entries and other stuff. I would reduce the
number of partitions to a more reasonable value (within the tens, most
likely)

The db worked fine until it reached perhaps 30-40 thousand partitions.

It depends on how you have the partitions set up and how you're
accessing them. Are all of these partitions under the same parent
table? If so, then trying to run a SELECT COUNT(*) against the parent
table is simply insane. Think about it: you're asking one query to
scan 55000 tables. What you need to do is partition based on your
access patterns, not what you *think* will help with performance down
the road. Look into constraint exclusion, whether or not you can
just access child tables directly, and whether you really need all of
these under one logical table. Also, no matter how you do the
partitioning, once you get up to that many and more relations in your
system, dumps and restores take a lot longer.

Erik Jones
Software Developer | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com

#15Scott Marlowe
scott.marlowe@gmail.com
In reply to: Thomas Finneid (#11)
Re: select count() out of memory

On 10/25/07, tfinneid@student.matnat.uio.no
<tfinneid@student.matnat.uio.no> wrote:

Alvaro Herrera <alvherre@commandprompt.com> writes:

tfinneid@student.matnat.uio.no wrote:

I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.

Clearly it's not.

You couldn't have tested it too much --- even planning a query over so
many tables would take forever, and actually executing it would surely
have run the system out of locktable space before it even started
scanning.

And this is the testing, so you're right....

Its only the select on the root table that fails. Operations on a single
partitions is no problem.

Not sure I understand exactly what you're saying.

Are you selecting directly from the child table, or from the parent
table with constraint_exclusion turned on?

If you're hitting the child table directly, you aren't actually using
partitioning. It's a wholly independent table at that point.

If you're hitting a single child table through the parent table via
constraint_exclusion, then you are using partitioning, but only
hitting one physical table.

But hitting the parent table with no constraining where clause is a
recipe for disaster. The very reason to use partitioning is so that
you never have to scan through a single giant table.
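A minimal sketch of the constraint-exclusion pattern described above (the table names and the `seq_id` column are hypothetical, not from this thread):

```sql
SET constraint_exclusion = on;

-- Each child carries a CHECK constraint the planner can prove against
-- the query's WHERE clause, so non-matching children are pruned at
-- plan time.
CREATE TABLE attributes_seq_42 (
    CHECK (seq_id = 42)
) INHERITS (attributes);

-- Planned against attributes_seq_42 only, not all 55,000 children.
SELECT count(*) FROM attributes WHERE seq_id = 42;
```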

Anyway, you're heading off into new territory with 55,000 partitions.
What is the average size, in MB, of one of your partitions? I found
with my test that there was a point of diminishing returns after 400 or
so partitions, at which point indexes were no longer needed, because the
average query just seq scanned the partitions it needed, and they were
all ~16 or 32 MB.

#16Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Alvaro Herrera (#13)
Re: select count() out of memory

tfinneid@student.matnat.uio.no wrote:
It will work on a million partitions and more, provided you do
operations on single partitions.

That's good enough for me; that's exactly what I want. I just used the
select count() on the root to get a feeling of how many rows there were in
total. And then I thought that the error message was just a configuration
issue. But since doing operations like that on the root table at this
magnitude is not a good idea, I won't.
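For "a feeling of how many rows in total" there is a much cheaper route than scanning: the planner's statistics in pg_class. A sketch (the parent table name is hypothetical; reltuples is only as fresh as the last VACUUM or ANALYZE on each partition):

```sql
-- Approximate total rows across all children, without touching the data.
SELECT sum(c.reltuples)::bigint AS approx_rows
FROM pg_class c
JOIN pg_inherits i ON i.inhrelid = c.oid
WHERE i.inhparent = 'attributes'::regclass;
```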

What you want to do is not possible, period. Maybe when we redesign
partitioning, but that's far into the future. Kindly do not waste our
time (nor yours).

Thank you for that prompt reply.

In all fairness, that's why I asked the question here: to find out the
facts, not to be abused for being ignorant about pg.

thomas

#17Scott Marlowe
scott.marlowe@gmail.com
In reply to: Thomas Finneid (#1)
Re: select count() out of memory

On 10/25/07, tfinneid@student.matnat.uio.no
<tfinneid@student.matnat.uio.no> wrote:

Hi

I am volume testing a db model that consists of a paritioned tables. The
db has been running for a week and a half now and has built up to contain
approx 55000 partition tables of 18000 rows each. The root table therefore
contains about 1 billion rows. When I try to do a "select count(*)" of the
root table, it does some work for a while, perhaps 5-10 minutes and the
aborts with

ERROR: out of memory
DETAIL: Failed on request of size 130.

So, out of curiosity, I asked my Oracle DBA friend if she'd ever heard
of anyone having 60,000 or so partitions in a table, and she looked at
me like I had a third eye in my forehead and said in her sweet voice
"Well, that would certainly be an edge case". She sounded like she
was worried about me.

#18Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Thomas Finneid (#16)
Re: select count() out of memory

tfinneid@student.matnat.uio.no wrote:

tfinneid@student.matnat.uio.no wrote:
It will work on a million partitions and more, provided you do
operations on single partitions.

That's good enough for me; that's exactly what I want.

In that case, why use partitions at all? They are simple independent
tables.

--
Alvaro Herrera http://www.PlanetPostgreSQL.org/
"El destino baraja y nosotros jugamos" (A. Schopenhauer)

#19Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Scott Marlowe (#17)
Re: select count() out of memory

Scott Marlowe wrote:

So, out of curiosity, I asked my Oracle DBA friend if she'd ever heard
of anyone having 60,000 or so partitions in a table, and she looked at
me like I had a third eye in my forehead and said in her sweet voice
"Well, that would certainly be an edge case". She sounded like she
was worried about me.

Did you get rid of that third eye already? I would be equally worried.

--
Alvaro Herrera http://www.advogato.org/person/alvherre
"Someone said that it is at least an order of magnitude more work to do
production software than a prototype. I think he is wrong by at least
an order of magnitude." (Brian Kernighan)

#20Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Erik Jones (#14)
Re: select count() out of memory

The db worked fine until it reached perhaps 30-40 thousand partitions.

It depends on how you have the partitions set up and how you're
accessing them. Are all of these partitions under the same parent
table? If so, then trying to run a SELECT COUNT(*) against the parent
table is simply insane. Think about it: you're asking one query to
scan 55000 tables. What you need to do is partition based on your
access patterns, not what you *think* will help with performance down
the road. Look into constraint exclusion, whether or not you can
just access child tables directly, and whether you really need all of
these under one logical table. Also, no matter how you do the
partitioning, once you get up to that many and more relations in your
system, dumps and restores take a lot longer.

The design is based on access patterns, i.e. one partition represents a
group of data along a discrete axis, so partitions are perfect for
modeling that. Only the last partition will be used in normal cases. The
previous partitions only need to exist until the operator deletes them,
which will be sometime between 1-6 weeks.

Regarding dumps and restores: the system will always be offline during
those operations, and it will be so for several days, because a new project
might start at another location in the world, and the travelling there
takes time. In the meantime, all admin tasks can be performed without
problems, even backup operations that take 3 days.

regards

thomas

#21Erik Jones
erik@myemma.com
In reply to: Thomas Finneid (#20)
#22Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Scott Marlowe (#15)
#23Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Scott Marlowe (#17)
#24Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Alvaro Herrera (#18)
#25Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Erik Jones (#21)
#26Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Thomas Finneid (#24)
#27Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Finneid (#24)
#28Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Alvaro Herrera (#26)
#29Erik Jones
erik@myemma.com
In reply to: Alvaro Herrera (#26)
#30Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Tom Lane (#27)
#31Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Thomas Finneid (#3)
#32Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Finneid (#30)
#33Steve Crawford
scrawford@pinpointresearch.com
In reply to: Alvaro Herrera (#26)
#34Scott Marlowe
scott.marlowe@gmail.com
In reply to: Steve Crawford (#33)
#35Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Tom Lane (#32)
#36Scott Marlowe
scott.marlowe@gmail.com
In reply to: Thomas Finneid (#35)
#37Bruce Momjian
bruce@momjian.us
In reply to: Thomas Finneid (#35)
#38Jorge Godoy
jgodoy@gmail.com
In reply to: Thomas Finneid (#20)
#39Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Bruce Momjian (#37)
#40Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Scott Marlowe (#36)
#41Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Jorge Godoy (#38)
#42Sam Mason
sam@samason.me.uk
In reply to: Thomas Finneid (#39)
#43Sam Mason
sam@samason.me.uk
In reply to: Thomas Finneid (#40)
#44Bruce Momjian
bruce@momjian.us
In reply to: Sam Mason (#42)
#45Sam Mason
sam@samason.me.uk
In reply to: Bruce Momjian (#44)
#46Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Bruce Momjian (#44)
#47Paul Boddie
paul@boddie.org.uk
In reply to: Thomas Finneid (#20)
#48Adrian Klaver
adrian.klaver@aklaver.com
In reply to: Thomas Finneid (#46)
#49Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Adrian Klaver (#48)
#50Adrian Klaver
adrian.klaver@aklaver.com
In reply to: Thomas Finneid (#49)
#51Thomas Finneid
tfinneid@student.matnat.uio.no
In reply to: Adrian Klaver (#50)