PREPARE/EXECUTE across backends?
Hi,
From the documentation, it seems that PREPARE/EXECUTE works only within the
same session. I am wondering whether Postgres can prepare a query (save the
plan) for different backends.
I am working on a project which requires executing "psql -c 'query'" on the
command line multiple times. Since performance is critical, it would be nice
to prepare the query once so it isn't parsed and optimized each time. But
psql opens a new backend each time, so PREPARE/EXECUTE doesn't seem to work.
Is there any workaround?
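To make the problem concrete, this is roughly the pattern (the table and
statement names here are just placeholders for the example):

    $ psql mydb -c 'PREPARE get_order (int) AS SELECT * FROM orders WHERE id = $1'
    $ psql mydb -c 'EXECUTE get_order(1)'

The second command fails, because each psql invocation opens its own
session, and the statement prepared in the first session is gone by then.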
Thanks
On Wed, 2003-10-01 at 20:25, Jingren Zhou wrote:
From the documentation, it seems that PREPARE/EXECUTE works only within the
same session. I am wondering whether Postgres can prepare a query (save the
plan) for different backends.
The decision to store prepared statements per-backend, rather than in
shared memory, was made deliberately. In fact, an early version of the
PREPARE/EXECUTE patch (written by Karel Zak) stored prepared statements
in shared memory. But I decided to remove this, because:
- it is more complex
- since shared memory must be allocated statically on postmaster
startup, it would make prepared statements more fragile: at some point
we would run out of room in shm, and need to either remove prepared
statements, or swap them out to disk
- it would encourage poor application design, since it wouldn't be
trivial to tell whether a given prepared query has already been prepared
by a different backend, and what name it is using
- the performance gains are not that dramatic: preparing a statement once
per active backend is not that expensive in most of the cases where
prepared statements are useful, since the # of backends is usually far
smaller than the # of times you're executing a given prepared statement
That's all the reasons I can think of off the top of my head for doing
things the way we do. However, I'm open to being convinced: if you think
we should store prepared statements in shm, feel free to make a case for
it.
-Neil
On Wed, 1 Oct 2003, Jingren Zhou wrote:
Hi,
From the documentation, it seems that PREPARE/EXECUTE works only within the
same session. I am wondering whether Postgres can prepare a query (save the
plan) for different backends. I am working on a project which requires
executing "psql -c 'query'" on the command line multiple times. Since
performance is critical, it would be nice to prepare the query once so it
isn't parsed and optimized each time. But psql opens a new backend each
time, so PREPARE/EXECUTE doesn't seem to work. Is there any workaround?
Your real overhead here isn't from having to prepare the query each time;
it's from having to start psql and open a new connection each time.
Perhaps you need to rethink your design and go with something that will
maintain a persistent connection.
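As a rough sketch (database, table, and statement names invented), batching
the work into a single psql session lets you prepare once and execute many
times over one connection:

    $ psql mydb <<'EOF'
    PREPARE get_order (int) AS
        SELECT * FROM orders WHERE id = $1;
    EXECUTE get_order(1);
    EXECUTE get_order(2);
    EXECUTE get_order(3);
    EOF

Anything that keeps one connection open (a small client program, a script
feeding a single psql, or a connection pooler) gets you the same effect, and
it also saves the connection-startup cost.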
Kris Jurka
Neil Conway <neilc@samurai.com> writes:
The decision to store prepared statements per-backend, rather than in
shared memory, was made deliberately. In fact, an early version of the
PREPARE/EXECUTE patch (written by Karel Zak) stored prepared statements
in shared memory. But I decided to remove this, because:
[ several good reasons ]
Another issue is that we currently don't have a mechanism for flushing
query plans when they become obsolete (eg, an index is added or
removed). Locally-cached plans are relatively easy to refresh: just
start a fresh session. A shared plan cache would retain bogus plans
forever, short of a postmaster restart.
Obviously we need a mechanism for detecting and handling cached-plan
invalidations, and I hope someone will get around to that soon.
But we *cannot* consider a shared plan cache until that mechanism
exists.
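To illustrate with a made-up example (table and index names invented), a
locally prepared plan simply never notices later schema changes:

    PREPARE find_user (text) AS
        SELECT * FROM users WHERE name = $1;
    -- planned immediately; there is no index on "name" yet, so the
    -- plan is presumably a seqscan
    CREATE INDEX users_name_idx ON users (name);
    EXECUTE find_user('alice');
    -- still runs the plan chosen at PREPARE time; the new index is
    -- not considered until the statement is prepared again

With per-backend statements you can at least recover by reconnecting or
re-preparing; a shared cache would keep handing the stale plan to everyone.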
If I recall correctly, Karel's original shared plan cache also triggered
a lot of concern about contention for the shared data structure ...
I'm not convinced that it would be a big bottleneck, but there's
definitely an issue to think about there ...
regards, tom lane
On Wed, 2003-10-01 at 22:43, Tom Lane wrote:
Another issue is that we currently don't have a mechanism for flushing
query plans when they become obsolete (eg, an index is added or
removed). Locally-cached plans are relatively easy to refresh: just
start a fresh session. A shared plan cache would retain bogus plans
forever, short of a postmaster restart.
Well, keep in mind we already have DEALLOCATE for removing prepared
statements, which would continue to be available if we switched to
storing prepared statements in shared memory. However, using DEALLOCATE
to get around invalid cached plans is obviously not a good solution.
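That is, the manual refresh looks something like this (statement name
invented for the example):

    DEALLOCATE get_order;
    PREPARE get_order (int) AS
        SELECT * FROM orders WHERE id = $1;
    -- re-planned from scratch, so any new indexes can be considered

but the application has to somehow know that the old plan went stale, which
is exactly the missing piece.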
Obviously we need a mechanism for detecting and handling cached-plan
invalidations, and I hope someone will get around to that soon.
Agreed.
But we *cannot* consider a shared plan cache until that mechanism
exists.
Given the presence of DEALLOCATE, I think this overstates the case somewhat:
longer-lived prepared statements stored in shared memory would, of course,
make handling invalidated plans more of an issue.
-Neil
On Wed, Oct 01, 2003 at 09:01:23PM -0400, Neil Conway wrote:
On Wed, 2003-10-01 at 20:25, Jingren Zhou wrote:
From the documentation, it seems that PREPARE/EXECUTE works only within the
same session. I am wondering whether Postgres can prepare a query (save the
plan) for different backends.
The decision to store prepared statements per-backend, rather than in
shared memory, was made deliberately. In fact, an early version of the
PREPARE/EXECUTE patch (written by Karel Zak) stored prepared statements
in shared memory. But I decided to remove this, because:
That's all the reasons I can think of off the top of my head for doing
things the way we do. However, I'm open to being convinced: if you think
we should store prepared statements in shm, feel free to make a case for
it.
I think the current non-shared PREPARE/EXECUTE is the right solution. The
shared version requires defining a new type of memory management routine
that is fully compatible with the standard, abstract PostgreSQL mmgr tier.
I worked on this because it was an interesting experiment, and now we know
that writing something like this is possible :-)
I think the final and right solution is persistent, pre-forked backends
which know how to keep the cached PREPARE/EXECUTE stuff (and a lot of other
things) in their own memory. That is a nicer and simpler solution than
using shared memory.
Karel
--
Karel Zak <zakkr@zf.jcu.cz>
http://home.zf.jcu.cz/~zakkr/