The query cache - first snapshot (long)

Started by Karel Zak over 25 years ago · 7 messages
#1Karel Zak
zakkr@zf.jcu.cz

The Query Cache
~~~~~~~~~~~~~~~
(Apologies if you receive this email twice; I first sent it with the patch
attached, but this list apparently has a size limit, because that email has
still not appeared. Hmm...)

Now, the patch is available at:

ftp://ftp2.zf.jcu.cz/users/zakkr/pg/pg_qcache-07182000.patch.tar.gz

The patch must be applied to the current (07/18/2000) CVS version. Because the
code in CVS is under very active development, you can download the full PG
source with the query cache from:

ftp://ftp2.zf.jcu.cz/users/zakkr/pg/pg_qcache-07182000.tar.gz

or you can check out the source from CVS (as of 07/18/2000):

export CVSROOT=":pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot"
cvs login
cvs co -D "07/18/2000 12:00" pgsql
cd pgsql/src
patch -p1 < pqsql-qcache-07182000.patch

The Query Cache and new SPI description
=======================================

Note: the cache is based on the new memory design.

Implementation
~~~~~~~~~~~~~~
The qCache allows saving the query tree and query plan. Two spaces are
available for data caching:

LOCAL - data are cached in the backend's non-shared memory and are not
visible to other backends.

SHARE - data are cached in shared memory and are visible to all
backends.

Because the size of the shared memory pool is limited and is set during
postmaster start, the qCache must evict old plans when the pool is full.
Each entry can be marked "REMOVEABLE" or "NOTREMOVEABLE".

A removeable entry is evicted when the pool is full and the entry is last
in the list that tracks entry usage (i.e. it is the least recently used).

A not-removeable entry must be removed explicitly via qCache_Remove() or
one of the other routines; the qCache never removes such an entry itself.
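The eviction policy described above is essentially LRU with pinned
("NOTREMOVEABLE") entries. A minimal sketch, using hypothetical structure
and function names rather than the actual qCache code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical cache entry in a doubly-linked usage list: most
 * recently used at the head, least recently used at the tail. */
typedef struct Entry
{
	struct Entry *prev, *next;
	int           removeable;     /* REMOVEABLE vs NOTREMOVEABLE */
} Entry;

typedef struct
{
	Entry *head, *tail;
} UsageList;

/* On each access, move the entry to the head of the list. */
static void
usage_touch(UsageList *l, Entry *e)
{
	if (l->head == e)
		return;
	if (e->prev)
		e->prev->next = e->next;
	if (e->next)
		e->next->prev = e->prev;
	if (l->tail == e)
		l->tail = e->prev;
	e->prev = NULL;
	e->next = l->head;
	if (l->head)
		l->head->prev = e;
	l->head = e;
	if (l->tail == NULL)
		l->tail = e;
}

/* When the pool is full, evict the least recently used REMOVEABLE
 * entry; NOTREMOVEABLE entries are skipped and must be removed
 * explicitly by the caller. */
static Entry *
usage_victim(UsageList *l)
{
	Entry *e;

	for (e = l->tail; e != NULL; e = e->prev)
		if (e->removeable)
			return e;
	return NULL;                  /* nothing evictable */
}
```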

All records in the qCache are stored in a hash table under a key. The
qCache supports two kinds of key --- "KEY_STRING" and "KEY_BINARY". A key
must always be less than or equal to "QCACHE_HASH_KEYSIZE" (128 bytes).

The qCache API does not let callers access the shared memory directly; all
cached plans that the API returns are copies made into CurrentMemoryContext
or into a caller-specified context. All qCache_* routines take the shmem
lock themselves (the exception is qCache_RemoveOldest_ShareRemoveAble()).

- a spinlock is used for this locking.
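The locking discipline -- every qCache_ routine takes the lock itself, and
the caller only ever receives a private copy, never a pointer into shared
memory -- can be sketched like this. The lock here is a toy flag standing
in for the real spinlock, and all names are invented:

```c
#include <assert.h>

/* Toy single-flag lock standing in for the spinlock; real backend
 * code would use PostgreSQL's spinlock primitives instead. */
static int qcache_lock = 0;

static void
qcache_acquire(void)
{
	assert(qcache_lock == 0);     /* real code would spin here */
	qcache_lock = 1;
}

static void
qcache_release(void)
{
	assert(qcache_lock == 1);
	qcache_lock = 0;
}

/* Stand-in for a plan stored in shared memory. */
static int shared_plan = 0;

/* The public routine locks internally and returns a copy, so the
 * caller never touches shared memory directly. */
static int
qcache_fetch_copy(void)
{
	int copy;

	qcache_acquire();
	copy = shared_plan;           /* copy out while holding the lock */
	qcache_release();
	return copy;
}

static void
qcache_store(int plan)
{
	qcache_acquire();
	shared_plan = plan;
	qcache_release();
}
```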

Memory management
~~~~~~~~~~~~~~~~~
The qCache's shared pool is organized via memory contexts that are
independent of the standard aset/mcxt code but use a compatible API --- this
allows the standard palloc() to be used (which is very needful for basic
plan-tree operations, for example copyObject()). The qCache memory
management is very similar to the current aset.c code: it also uses chunked
blocks, but the block is smaller - 1024 bytes.

The number of blocks can be set on the postmaster command line via the
'-Z' option.

A separate MemoryContext is used to store each plan. This is a good idea
(Hiroshi's?), because creating a new context is simple and inexpensive,
and it makes destroying (freeing) a cached plan easy. This method is used
in my SPI overhaul instead of feeding TopMemoryContext.
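The one-context-per-plan idea can be shown in miniature. The toy context
below uses invented names and a simple linked list instead of the real
block-based aset.c machinery, but it has the property that matters here:
freeing a cached plan is just deleting its context, with no per-allocation
bookkeeping by the caller:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy memory context: every allocation is linked into the context,
 * so deleting the context frees everything allocated in it at once. */
typedef struct AllocHdr
{
	struct AllocHdr *next;
} AllocHdr;

typedef struct
{
	AllocHdr *allocs;
} PlanContext;

static void *
ctx_alloc(PlanContext *cxt, size_t size)
{
	AllocHdr *hdr = malloc(sizeof(AllocHdr) + size);

	hdr->next = cxt->allocs;
	cxt->allocs = hdr;
	return hdr + 1;               /* usable memory follows the header */
}

/* Destroying a cached plan == deleting its private context. */
static void
ctx_delete(PlanContext *cxt)
{
	while (cxt->allocs)
	{
		AllocHdr *next = cxt->allocs->next;

		free(cxt->allocs);
		cxt->allocs = next;
	}
}
```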

Postmaster
~~~~~~~~~~
The query cache memory is initialized during postmaster startup. The size
of the query cache pool is set via the '-Z <number-of-blocks>' switch ---
the default is 100 blocks, where 1 block = 1024 bytes, which is sufficient
for 20-30 cached plans. One query needs somewhere around 3-10 blocks; for
example a query like

PREPARE sel AS SELECT * FROM pg_class;

needs 10Kb, because the pg_class table has a great many columns.
--

Note: for development I added an SQL function, "SELECT qcache_state();",
which shows qCache usage.

SPI
~~~
I have slightly rewritten the SPI save-plan method and removed the
TopMemoryContext "feeding" (already discussed).

Standard SPI:

SPI_saveplan() - saves each plan into a separate standard memory context.

SPI_freeplan() - frees a plan.

By-key SPI:

This is the SPI interface to the query cache; it allows saving plans to the
SHARED or LOCAL cache under an arbitrary key (string or binary). Routines:

SPI_saveplan_bykey() - save a plan to the query cache

SPI_freeplan_bykey() - remove a plan from the query cache

SPI_fetchplan_bykey() - fetch a plan saved in the query cache

SPI_execp_bykey() - execute (via SPI) a plan saved in the query
cache

- users can now write functions that save plans to shared memory; such
plans are visible in all backends and persist across connections.

Example:
~~~~~~~
/* ----------
* Save/exec query from shared cache via string key
* ----------
*/
int   keySize = 0;
int   flag = SPI_BYKEY_SHARE | SPI_BYKEY_STRING;
char *key = "my unique key";
int   res;

res = SPI_execp_bykey(values, nulls, tcount, key, flag, keySize);

if (res == SPI_ERROR_PLANNOTFOUND)
{
	/* --- plan is not in the cache - must create it --- */
	void *plan;

	plan = SPI_prepare(querystr, valnum, valtypes);
	SPI_saveplan_bykey(plan, key, keySize, flag);

	res = SPI_execp(plan, values, nulls, tcount);
}

elog(NOTICE, "Processed: %d", SPI_processed);

PREPARE/EXECUTE
~~~~~~~~~~~~~~~
* Syntax:

PREPARE <name> AS <query>
[ USING type, ... typeN ]
[ NOSHARE | SHARE | GLOBAL ]

EXECUTE <name>
[ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]
[ USING val, ... valN ]
[ NOSHARE | SHARE | GLOBAL ]

DEALLOCATE PREPARE
[ <name> [ NOSHARE | SHARE | GLOBAL ]]
[ ALL | ALL INTERNAL ]

I know this is a little outside SQL92... (should CREATE/DROP PLAN be used
instead?) --- what do the SQL standard gurus say?

* Where:

NOSHARE --- cached in the local backend query cache - not accessible
from other backends and not persistent across
connections.

SHARE --- cached in the shared query cache and accessible from
all backends working on the same database.

GLOBAL --- cached in the shared query cache and accessible from
all backends and all databases.

- the default is 'SHARE'

Deallocate:

ALL --- deallocate all of the user's plans

ALL INTERNAL --- deallocate all internal plans, such as plans
cached via SPI. This is needed if the user
alters/drops a table, etc.

* Parameters:

"USING" part in the prepare statement is for datetype setting for
paremeters in the query. For example:

PREPARE sel AS SELECT * FROM pg_class WHERE relname ~~ $1 USING text;

EXECUTE sel USING 'pg%';

* Limitations:

- PREPARE/EXECUTE accepts full SELECT/INSERT/DELETE/UPDATE
statements.
- unions, subselects, LIMIT, OFFSET and SELECT INTO are possible

Performance:
~~~~~~~~~~~~
* the SPI

- For my tests I slightly changed the RI triggers to use the by-key
SPI API and save plans to the shared qCache instead of the internal
RI hash table.

The RI triggers use queries that are very simple to parse, so the
qCache gain is not visible there. It helps more when backends start
up very often and the RI checks always hit the same tables; in that
situation speed goes up by 10-12%.
(This snapshot does not include this RI change.)

But everything depends on how complicated the trigger's query is
for the parser.

* PREPARE/EXECUTE

- For these tests I used a query that touches no table (the executor
sits idle) but is hard work for the parser. An example:

SELECT 'a text ' || (10*10+(100^2))::text || ' next text ' || cast
(date_part('year', timestamp 'now') AS text );

- (10000 iterations of this query):

standard select: 54 sec
via prepare/execute: 4 sec (93% better)

IMHO that is not bad.
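A quick sanity check on that 93% figure: it is the relative time saved,
(54 - 4) / 54, which is about 92.6% and rounds to 93%:

```c
#include <assert.h>

/* Relative improvement as a percentage of the original run time. */
static double
speedup_pct(double old_sec, double new_sec)
{
	return (old_sec - new_sec) / old_sec * 100.0;
}
```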

- For a standard query like:

SELECT u.usename, r.relname FROM pg_class r, pg_user u WHERE
r.relowner = u.usesysid;

PREPARE/EXECUTE is 10-20% faster.

I will be *very glad* if someone tries and tests the patch; some discussion
is wanted too.

Thanks.

Karel

PS. Excuse me, my English is poor and this text is long --- not a good
combination...

#2Bruce Momjian
pgman@candle.pha.pa.us
In reply to: Karel Zak (#1)
Re: The query cache - first snapshot (long)

> standard select: 54 sec
> via prepare/execute: 4 sec (93% better)
>
> I will be *very glad* if someone tries and tests the patch; some
> discussion is wanted too.

Wow, just when I thought we couldn't get much faster. That is great.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
#3Karel Zak
zakkr@zf.jcu.cz
In reply to: Bruce Momjian (#2)
Re: The query cache - first snapshot (long)

On Wed, 26 Jul 2000, Bruce Momjian wrote:

> > standard select: 54 sec
> > via prepare/execute: 4 sec (93% better)
>
> Wow, just when I thought we couldn't get much faster. That is great.

Thanks very much!

Your answer is the first in the week since this snapshot went out... I was
a little worried that it is of no interest to the others.

It is not only PREPARE/EXECUTE; there is also the new SPI_saveplan()
design, brought in line with Tom's new memory design, and the new SPI
'bykey' interface for saving/executing queries --- it can be good for PLs,
which then need not run their own internal saved-query management and hash
tables, etc.

Karel

#4Karel Zak
zakkr@zf.jcu.cz
In reply to: Karel Zak (#1)
quiet? Re: The query cache - first snapshot (long)

Still *quiet* on this topic? I sent it out two weeks ago and still haven't
had any reaction. I can stop working on this if it is not wanted and not
interesting...

Karel

On Wed, 19 Jul 2000, Karel Zak wrote:

[full text of the original announcement quoted - trimmed]

#5Alfred Perlstein
bright@wintelcom.net
In reply to: Karel Zak (#4)
Re: quiet? Re: The query cache - first snapshot (long)

* Karel Zak <zakkr@zf.jcu.cz> [000731 02:52] wrote:

> Still *quiet* on this topic? I sent it out two weeks ago and still
> haven't had any reaction. I can stop working on this if it is not
> wanted and not interesting...

I'd really like to see it go into postgresql.

Who dropped the ball on this? :)

-Alfred

#6Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alfred Perlstein (#5)
Re: quiet? Re: The query cache - first snapshot (long)

Alfred Perlstein <bright@wintelcom.net> writes:

> * Karel Zak <zakkr@zf.jcu.cz> [000731 02:52] wrote:
> I'd really like to see it go into postgresql.
>
> Who dropped the ball on this? :)

It needs review. Careful review. I haven't had time and I guess the
other core members haven't either. I'm hoping to have more time soon
though...

regards, tom lane

#7Karel Zak
zakkr@zf.jcu.cz
In reply to: Tom Lane (#6)
Re: quiet? Re: The query cache - first snapshot (long)

On Mon, 31 Jul 2000, Tom Lane wrote:

Alfred Perlstein <bright@wintelcom.net> writes:

> * Karel Zak <zakkr@zf.jcu.cz> [000731 02:52] wrote:
> I'd really like to see it go into postgresql.
>
> Who dropped the ball on this? :)

> It needs review. Careful review. I haven't had time and I guess the
> other core members haven't either. I'm hoping to have more time soon
> though...

Thanks Tom.

Karel

BTW --- testers need not always be core members; it is a challenge for
the others too :-)