Use cases

Started by Mark Woodward almost 20 years ago, 3 messages
#1 Mark Woodward
pgsql@mohawksoft.com

I think we've talked about this a couple of times over the years, but I'm
not sure whether it was ever resolved.

The recent post about load testing against SQLite showed PostgreSQL in a
poor light. Yes, I know, it was the unoptimized Windows port, I can see
that, but it raises something else: the need for a good set of baselines
for people to build from.

Use cases: what you intend to use PostgreSQL for.

Maybe it is a set of example postgresql.conf files for various uses, maybe
a set of documentation, maybe a set of custom analyzer queries, and more
likely a combination of all of these.

One of the things that makes this interesting is that the stated goals of
the use cases and the actual settings would be exposed to a wider audience,
and hence have a better chance of being further refined by people with
more experience with different aspects of PostgreSQL.

We could provide whole postgresql.conf files for specific hardware
targets, SQL scripts to alter settings, and maybe custom analyzers.

For instance:
Data loading: a configuration that will restore a pg_dump file or accept
bulk data loads very quickly (a sketch of such a file follows this list).

Small footprint, limited resources.

Large server, large tables.

Large server, many small databases.

Giant server, many large tables, possibly many databases.

High-speed static (read-mostly) server, "balls to the wall", super
optimized for query speed.

High-speed dynamic server, read/write as fast as possible.
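
To make one of these concrete, here is a rough, hypothetical sketch of what
the "data loading" profile might look like as a postgresql.conf fragment for
the 8.1/8.2 era; the parameter names are real, but the values are
illustrative assumptions rather than tested recommendations:

    # Hypothetical bulk-load profile (illustrative values only)
    shared_buffers = 50000           # pages; give the server a real buffer pool
    maintenance_work_mem = 524288    # kB; speeds up index rebuilds after the COPY phase
    checkpoint_segments = 64         # fewer forced checkpoints under heavy WAL traffic
    wal_buffers = 256                # pages; larger WAL buffer for big transactions
    autovacuum = off                 # turn back on (and ANALYZE) once the load finishes
    fsync = off                      # only acceptable if the whole load can be redone from scratch

Each target in the list above would get its own fragment like this, which is
exactly the kind of thing a wider audience could then refine.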

Any other suggestions?

#2 Andrew Dunstan
andrew@dunslane.net
In reply to: Mark Woodward (#1)
Re: Use cases

In case you missed it:

In 8.2, the defaults that initdb picks for shared_buffers and
max_fsm_pages will be significantly higher if the machine can stand it.
This should have a good performance impact on the "out of the box"
configuration.
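
For anyone who wants to see what initdb settled on for their own
installation, the two values can be checked from psql with a query like the
one below (a plain SHOW shared_buffers works too); nothing here is
version-specific beyond the parameter names themselves:

    -- Check what the running server is actually using
    SELECT name, setting
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'max_fsm_pages');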

Frankly - supplying more sample configs is likely to be fairly
fruitless. A much better thing would be a really good tuning tool that
would take stats and logs and other stuff from a running server and
suggest improvements (e.g. add an index on fields (foo,bar) on baz, try
doubling work_mem, increase stats buckets on blurfl ...)
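
As a minimal sketch of the kind of heuristic such a tool might start from
(the thresholds here are arbitrary assumptions), the statistics collector
already exposes enough to flag tables that are sequentially scanned far more
often than they are index-scanned, i.e. the "maybe add an index" candidates:

    -- Tables that look like missing-index candidates (thresholds are made up)
    SELECT schemaname, relname, seq_scan, seq_tup_read, idx_scan
      FROM pg_stat_user_tables
     WHERE seq_scan > 1000
       AND seq_scan > 10 * COALESCE(idx_scan, 0)
     ORDER BY seq_tup_read DESC;

Suggestions about work_mem or statistics targets would need the query logs
as well, which is presumably where the "logs and other stuff" comes in.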

The "benchmark" referred to is so full of holes it's hard to know where
to start.

Database benchmarks are things that many years of study have gone into -
this sort of homegrown effort is rather like a backyard attempt to
construct a Maserati. The lack of any testing of concurrency is very
telling.

cheers

andrew


#3 Jim C. Nasby
jnasby@pervasive.com
In reply to: Andrew Dunstan (#2)
Re: Use cases

On Sun, Feb 12, 2006 at 06:29:21PM -0500, Andrew Dunstan wrote:

> Frankly - supplying more sample configs is likely to be fairly
> fruitless. A much better thing would be a really good tuning tool that
> would take stats and logs and other stuff from a running server and
> suggest improvements (e.g. add an index on fields (foo,bar) on baz, try
> doubling work_mem, increase stats buckets on blurfl ...)

I disagree. Many people have gotten used to the idea of having multiple
config files to choose from, thanks to MySQL.

> Database benchmarks are things that many years of study have gone into -
> this sort of homegrown effort is rather like a backyard attempt to
> construct a Maserati. The lack of any testing of concurrency is very
> telling.

True, but in this case the lack of any kind of reasonable config was a
much bigger issue. Except for test 8, the numbers improved once he made
a few config tweaks that I suggested (see the email thread I posted
about a day or two ago for details).
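
The referenced thread isn't reproduced here, so the actual tweaks are not
shown; purely as a hypothetical illustration of the kind of settings an
untuned installation of that era left very low, changes of this general sort
are what's usually meant:

    # Hypothetical examples only; not the actual tweaks from the referenced thread
    shared_buffers = 20000          # pages; the stock default was tiny
    effective_cache_size = 100000   # pages; tell the planner how much OS cache really exists
    work_mem = 8192                 # kB; the 1 MB default is easy to outgrow on sort-heavy queries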
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461