Bitmapscan changes
Here's a patch to change the amgetmulti API so that it's called only
once per scan, and the indexam adds *all* matching tuples at once to a
caller-supplied TIDBitmap. Per Tom's proposal in July 2006:
http://archives.postgresql.org/pgsql-hackers/2006-07/msg01233.php
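For illustration, here's a minimal sketch of the shape an amgetbitmap
implementation takes under the new API (my_next_match is a hypothetical
per-AM helper; see the real implementations in the patch below, e.g.
gingetbitmap):

Datum
myamgetbitmap(PG_FUNCTION_ARGS)
{
    IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
    TIDBitmap  *tbm = (TIDBitmap *) PG_GETARG_POINTER(1);
    int32       ntids = 0;
    ItemPointerData iptr;

    /*
     * Called only once per scan: walk the whole index and add every
     * match to the caller-supplied bitmap, instead of filling a
     * caller-supplied TID array in chunks as amgetmulti did.
     */
    while (my_next_match(scan, &iptr))  /* hypothetical helper */
    {
        tbm_add_tuples(tbm, &iptr, 1, false);
        ntids++;
    }
    PG_RETURN_INT32(ntids);
}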
The patch also adds support for candidate matches. An index scan can
indicate that the tuples it's returning are candidates, and the executor
will recheck the original scan quals of any candidate matches when the
tuple is fetched from the heap. Candidate status is tracked in the
TIDBitmap on a per-page basis.
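As a sketch of how the two halves cooperate: the access method passes
candidates = true to the new tbm_add_tuples, and the executor (condensed
from the nodeBitmapHeapscan.c hunk below) treats candidate pages like
lossy ones for recheck purposes:

    /* AM side: these TIDs are only a superset of the real matches */
    tbm_add_tuples(tbm, tids, ntids, true);

    /* Executor side: lossy pages (ntuples < 0) and candidate pages
     * both force a recheck of the original quals on the heap tuple */
    if (tbmres->ntuples < 0 || tbmres->iscandidates)
    {
        /* recheck the original scan quals against the fetched tuple */
    }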
No current indexam returns candidate matches, but candidate pages are
also used (with this patch) when bitmap-ANDing a lossy page with a
non-lossy one: the result is a non-lossy candidate page, containing the
bits of the non-lossy page.
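Condensed from the tidbitmap.c intersection hunk below, the rule is
simply this (apage being an exact page of 'a'):

    else if (tbm_page_is_lossy(b, apage->blockno))
    {
        /*
         * 'b' has matches somewhere on this page, but we don't know
         * where, so apage's exact bits become an upper bound: keep
         * them, but flag the page so the quals get rechecked.
         */
        apage->iscandidate = true;
    }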
The motivation for adding support for candidate matches is that GIT /
clustered indexes need it. We'll likely modify the API further to
support stream bitmaps when the bitmap indexam patch moves forward, but
this is a step in the right direction and provides some immediate
benefit.
I added some regression tests to test bitmap AND and OR with a mixture
of lossy and non-lossy pages, and to test the GIN getbitmap function,
which wasn't being exercised by any existing regression tests.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Attachment: getmulti_to_getbitmap.patch (text/x-diff)
Index: doc/src/sgml/catalogs.sgml
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/doc/src/sgml/catalogs.sgml,v
retrieving revision 2.145
diff -c -r2.145 catalogs.sgml
*** doc/src/sgml/catalogs.sgml 14 Feb 2007 01:58:55 -0000 2.145
--- doc/src/sgml/catalogs.sgml 8 Mar 2007 21:39:15 -0000
***************
*** 436,445 ****
</row>
<row>
! <entry><structfield>amgetmulti</structfield></entry>
<entry><type>regproc</type></entry>
<entry><literal><link linkend="catalog-pg-proc"><structname>pg_proc</structname></link>.oid</literal></entry>
! <entry><quote>Fetch multiple tuples</quote> function</entry>
</row>
<row>
--- 436,445 ----
</row>
<row>
! <entry><structfield>amgetbitmap</structfield></entry>
<entry><type>regproc</type></entry>
<entry><literal><link linkend="catalog-pg-proc"><structname>pg_proc</structname></link>.oid</literal></entry>
! <entry><quote>Fetch all valid tuples</quote> function</entry>
</row>
<row>
Index: doc/src/sgml/indexam.sgml
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/doc/src/sgml/indexam.sgml,v
retrieving revision 2.22
diff -c -r2.22 indexam.sgml
*** doc/src/sgml/indexam.sgml 22 Feb 2007 22:00:22 -0000 2.22
--- doc/src/sgml/indexam.sgml 8 Mar 2007 21:41:02 -0000
***************
*** 317,339 ****
<para>
<programlisting>
! boolean
! amgetmulti (IndexScanDesc scan,
! ItemPointer tids,
! int32 max_tids,
! int32 *returned_tids);
</programlisting>
! Fetch multiple tuples in the given scan. Returns TRUE if the scan should
! continue, FALSE if no matching tuples remain. <literal>tids</> points to
! a caller-supplied array of <literal>max_tids</>
! <structname>ItemPointerData</> records, which the call fills with TIDs of
! matching tuples. <literal>*returned_tids</> is set to the number of TIDs
! actually returned. This can be less than <literal>max_tids</>, or even
! zero, even when the return value is TRUE. (This provision allows the
! access method to choose the most efficient stopping points in its scan,
! for example index page boundaries.) <function>amgetmulti</> and
<function>amgettuple</> cannot be used in the same index scan; there
! are other restrictions too when using <function>amgetmulti</>, as explained
in <xref linkend="index-scanning">.
</para>
--- 317,331 ----
<para>
<programlisting>
! int32
! amgetbitmap (IndexScanDesc scan,
! TIDBitmap *tbm);
</programlisting>
! Fetch all matching tuples in the given scan, adding them to the
! caller-supplied TIDBitmap. The number of tuples fetched is returned.
! <function>amgetbitmap</> and
<function>amgettuple</> cannot be used in the same index scan; there
! are other restrictions too when using <function>amgetbitmap</>, as explained
in <xref linkend="index-scanning">.
</para>
***************
*** 488,507 ****
<para>
Instead of using <function>amgettuple</>, an index scan can be done with
! <function>amgetmulti</> to fetch multiple tuples per call. This can be
noticeably more efficient than <function>amgettuple</> because it allows
avoiding lock/unlock cycles within the access method. In principle
! <function>amgetmulti</> should have the same effects as repeated
<function>amgettuple</> calls, but we impose several restrictions to
! simplify matters. In the first place, <function>amgetmulti</> does not
! take a <literal>direction</> argument, and therefore it does not support
! backwards scan nor intrascan reversal of direction. The access method
! need not support marking or restoring scan positions during an
! <function>amgetmulti</> scan, either. (These restrictions cost little
! since it would be difficult to use these features in an
! <function>amgetmulti</> scan anyway: adjusting the caller's buffered
! list of TIDs would be complex.) Finally, <function>amgetmulti</> does
! not guarantee any locking of the returned tuples, with implications
spelled out in <xref linkend="index-locking">.
</para>
--- 480,496 ----
<para>
Instead of using <function>amgettuple</>, an index scan can be done with
! <function>amgetbitmap</> to fetch all tuples in one call. This can be
noticeably more efficient than <function>amgettuple</> because it allows
avoiding lock/unlock cycles within the access method. In principle
! <function>amgetbitmap</> should have the same effects as repeated
<function>amgettuple</> calls, but we impose several restrictions to
! simplify matters. First, <function>amgetbitmap</> returns all tuples
! at once, so marking and restoring scan positions is not supported.
! Second, the tuples are returned in a bitmap that has no particular
! ordering, which is why <function>amgetbitmap</> doesn't take a
! <literal>direction</> argument. Finally, <function>amgetbitmap</>
! does not guarantee any locking of the returned tuples, with implications
spelled out in <xref linkend="index-locking">.
</para>
***************
*** 602,608 ****
</para>
<para>
! In an <function>amgetmulti</> index scan, the access method need not
guarantee to keep an index pin on any of the returned tuples. (It would be
impractical to pin more than the last one anyway.) Therefore
it is only safe to use such scans with MVCC-compliant snapshots.
--- 591,597 ----
</para>
<para>
! In an <function>amgetbitmap</> index scan, the access method need not
guarantee to keep an index pin on any of the returned tuples. (It would be
impractical to pin more than the last one anyway.) Therefore
it is only safe to use such scans with MVCC-compliant snapshots.
Index: src/backend/access/gin/ginget.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/gin/ginget.c,v
retrieving revision 1.7
diff -c -r1.7 ginget.c
*** src/backend/access/gin/ginget.c 1 Feb 2007 04:16:08 -0000 1.7
--- src/backend/access/gin/ginget.c 12 Mar 2007 11:18:29 -0000
***************
*** 423,456 ****
#define GinIsVoidRes(s) ( ((GinScanOpaque) scan->opaque)->isVoidRes == true )
Datum
! gingetmulti(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! ItemPointer tids = (ItemPointer) PG_GETARG_POINTER(1);
! int32 max_tids = PG_GETARG_INT32(2);
! int32 *returned_tids = (int32 *) PG_GETARG_POINTER(3);
if (GinIsNewKey(scan))
newScanKey(scan);
- *returned_tids = 0;
-
if (GinIsVoidRes(scan))
! PG_RETURN_BOOL(false);
startScan(scan);
! do
{
! if (scanGetItem(scan, tids + *returned_tids))
! (*returned_tids)++;
! else
break;
! } while (*returned_tids < max_tids);
stopScan(scan);
! PG_RETURN_BOOL(*returned_tids == max_tids);
}
Datum
--- 423,457 ----
#define GinIsVoidRes(s) ( ((GinScanOpaque) scan->opaque)->isVoidRes == true )
Datum
! gingetbitmap(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! TIDBitmap *tbm = (TIDBitmap *) PG_GETARG_POINTER(1);
! int32 ntids;
if (GinIsNewKey(scan))
newScanKey(scan);
if (GinIsVoidRes(scan))
! PG_RETURN_INT32(0);
startScan(scan);
! ntids = 0;
! for(;;)
{
! ItemPointerData iptr;
!
! if (!scanGetItem(scan, &iptr))
break;
!
! ntids++;
! tbm_add_tuples(tbm, &iptr, 1, false);
! }
stopScan(scan);
! PG_RETURN_INT32(ntids);
}
Datum
Index: src/backend/access/gist/gistget.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/gist/gistget.c,v
retrieving revision 1.64
diff -c -r1.64 gistget.c
*** src/backend/access/gist/gistget.c 20 Jan 2007 18:43:35 -0000 1.64
--- src/backend/access/gist/gistget.c 12 Mar 2007 11:18:24 -0000
***************
*** 22,28 ****
static OffsetNumber gistfindnext(IndexScanDesc scan, OffsetNumber n,
ScanDirection dir);
! static int gistnext(IndexScanDesc scan, ScanDirection dir, ItemPointer tids, int maxtids, bool ignore_killed_tuples);
static bool gistindex_keytest(IndexTuple tuple, IndexScanDesc scan,
OffsetNumber offset);
--- 22,30 ----
static OffsetNumber gistfindnext(IndexScanDesc scan, OffsetNumber n,
ScanDirection dir);
! static int gistnext(IndexScanDesc scan, ScanDirection dir,
! ItemPointer tid, TIDBitmap *tbm,
! bool ignore_killed_tuples);
static bool gistindex_keytest(IndexTuple tuple, IndexScanDesc scan,
OffsetNumber offset);
***************
*** 114,145 ****
* tuples, continue looping until we find a non-killed tuple that matches
* the search key.
*/
! res = (gistnext(scan, dir, &tid, 1, scan->ignore_killed_tuples)) ? true : false;
PG_RETURN_BOOL(res);
}
Datum
! gistgetmulti(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! ItemPointer tids = (ItemPointer) PG_GETARG_POINTER(1);
! int32 max_tids = PG_GETARG_INT32(2);
! int32 *returned_tids = (int32 *) PG_GETARG_POINTER(3);
! *returned_tids = gistnext(scan, ForwardScanDirection, tids, max_tids, false);
! PG_RETURN_BOOL(*returned_tids == max_tids);
}
/*
* Fetch a tuples that matchs the search key; this can be invoked
* either to fetch the first such tuple or subsequent matching
! * tuples. Returns true iff a matching tuple was found.
*/
static int
! gistnext(IndexScanDesc scan, ScanDirection dir, ItemPointer tids,
! int maxtids, bool ignore_killed_tuples)
{
Page p;
OffsetNumber n;
--- 116,152 ----
* tuples, continue looping until we find a non-killed tuple that matches
* the search key.
*/
! res = (gistnext(scan, dir, &tid, NULL, scan->ignore_killed_tuples)) ? true : false;
PG_RETURN_BOOL(res);
}
Datum
! gistgetbitmap(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! TIDBitmap *tbm = (TIDBitmap *) PG_GETARG_POINTER(1);
! int32 ntids;
! ntids = gistnext(scan, ForwardScanDirection, NULL, tbm, false);
! PG_RETURN_INT32(ntids);
}
/*
* Fetch a tuples that matchs the search key; this can be invoked
* either to fetch the first such tuple or subsequent matching
! * tuples.
! *
! * This function is used by both gistgettuple and gistgetbitmap. When
! * invoked from gistgettuple, tbm is null and the next matching tuple
! * is returned in *tid. When invoked from gistgetbitmap, tid is null
! * and all matching tuples are added to tbm. In both cases, the
! * number of matching tuples is returned.
*/
static int
! gistnext(IndexScanDesc scan, ScanDirection dir, ItemPointer tid,
! TIDBitmap *tbm, bool ignore_killed_tuples)
{
Page p;
OffsetNumber n;
***************
*** 292,304 ****
if (!(ignore_killed_tuples && ItemIdDeleted(PageGetItemId(p, n))))
{
it = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));
! tids[ntids] = scan->xs_ctup.t_self = it->t_tid;
! ntids++;
!
! if (ntids == maxtids)
{
LockBuffer(so->curbuf, GIST_UNLOCK);
! return ntids;
}
}
}
--- 299,315 ----
if (!(ignore_killed_tuples && ItemIdDeleted(PageGetItemId(p, n))))
{
it = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));
! if (tbm != NULL)
! {
! tbm_add_tuples(tbm, &it->t_tid, 1, false);
! ntids++;
! }
! else
{
+ *tid = scan->xs_ctup.t_self = it->t_tid;
+ ntids++;
+
LockBuffer(so->curbuf, GIST_UNLOCK);
! return ntids; /* always 1 */
}
}
}
Index: src/backend/access/hash/hash.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/hash/hash.c,v
retrieving revision 1.93
diff -c -r1.93 hash.c
*** src/backend/access/hash/hash.c 20 Jan 2007 18:43:35 -0000 1.93
--- src/backend/access/hash/hash.c 12 Mar 2007 11:18:17 -0000
***************
*** 239,310 ****
/*
! * hashgetmulti() -- get multiple tuples at once
! *
! * This is a somewhat generic implementation: it avoids lock reacquisition
! * overhead, but there's no smarts about picking especially good stopping
! * points such as index page boundaries.
*/
Datum
! hashgetmulti(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! ItemPointer tids = (ItemPointer) PG_GETARG_POINTER(1);
! int32 max_tids = PG_GETARG_INT32(2);
! int32 *returned_tids = (int32 *) PG_GETARG_POINTER(3);
HashScanOpaque so = (HashScanOpaque) scan->opaque;
- Relation rel = scan->indexRelation;
bool res = true;
int32 ntids = 0;
! /*
! * We hold pin but not lock on current buffer while outside the hash AM.
! * Reacquire the read lock here.
! */
! if (BufferIsValid(so->hashso_curbuf))
! _hash_chgbufaccess(rel, so->hashso_curbuf, HASH_NOLOCK, HASH_READ);
! while (ntids < max_tids)
{
! /*
! * Start scan, or advance to next tuple.
! */
! if (ItemPointerIsValid(&(so->hashso_curpos)))
! res = _hash_next(scan, ForwardScanDirection);
! else
! res = _hash_first(scan, ForwardScanDirection);
!
/*
* Skip killed tuples if asked to.
*/
if (scan->ignore_killed_tuples)
{
! while (res)
! {
! Page page;
! OffsetNumber offnum;
! offnum = ItemPointerGetOffsetNumber(&(so->hashso_curpos));
! page = BufferGetPage(so->hashso_curbuf);
! if (!ItemIdDeleted(PageGetItemId(page, offnum)))
! break;
! res = _hash_next(scan, ForwardScanDirection);
! }
}
- if (!res)
- break;
/* Save tuple ID, and continue scanning */
! tids[ntids] = scan->xs_ctup.t_self;
! ntids++;
! }
! /* Release read lock on current buffer, but keep it pinned */
! if (BufferIsValid(so->hashso_curbuf))
! _hash_chgbufaccess(rel, so->hashso_curbuf, HASH_READ, HASH_NOLOCK);
! *returned_tids = ntids;
! PG_RETURN_BOOL(res);
}
--- 239,286 ----
/*
! * hashgetbitmap() -- gets all matching tuples, and adds them to a bitmap
*/
Datum
! hashgetbitmap(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! TIDBitmap *tbm = (TIDBitmap *) PG_GETARG_POINTER(1);
HashScanOpaque so = (HashScanOpaque) scan->opaque;
bool res = true;
int32 ntids = 0;
! res = _hash_first(scan, ForwardScanDirection);
! while (res)
{
! bool add_tuple;
/*
* Skip killed tuples if asked to.
*/
if (scan->ignore_killed_tuples)
{
! Page page;
! OffsetNumber offnum;
! offnum = ItemPointerGetOffsetNumber(&(so->hashso_curpos));
! page = BufferGetPage(so->hashso_curbuf);
! add_tuple = !ItemIdDeleted(PageGetItemId(page, offnum));
}
+ else
+ add_tuple = true;
/* Save tuple ID, and continue scanning */
! if (add_tuple)
! {
! tbm_add_tuples(tbm, &scan->xs_ctup.t_self, 1, false);
! ntids++;
! }
! res = _hash_next(scan, ForwardScanDirection);
! }
! PG_RETURN_INT32(ntids);
}
Index: src/backend/access/index/indexam.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/index/indexam.c,v
retrieving revision 1.97
diff -c -r1.97 indexam.c
*** src/backend/access/index/indexam.c 5 Jan 2007 22:19:23 -0000 1.97
--- src/backend/access/index/indexam.c 8 Mar 2007 22:17:39 -0000
***************
*** 21,27 ****
* index_markpos - mark a scan position
* index_restrpos - restore a scan position
* index_getnext - get the next tuple from a scan
! * index_getmulti - get multiple tuples from a scan
* index_bulk_delete - bulk deletion of index tuples
* index_vacuum_cleanup - post-deletion cleanup of an index
* index_getprocid - get a support procedure OID
--- 21,27 ----
* index_markpos - mark a scan position
* index_restrpos - restore a scan position
* index_getnext - get the next tuple from a scan
! * index_getbitmap - get all tuples from a scan
* index_bulk_delete - bulk deletion of index tuples
* index_vacuum_cleanup - post-deletion cleanup of an index
* index_getprocid - get a support procedure OID
***************
*** 66,71 ****
--- 66,72 ----
#include "access/heapam.h"
#include "pgstat.h"
#include "utils/relcache.h"
+ #include "nodes/tidbitmap.h"
/* ----------------------------------------------------------------
***************
*** 510,551 ****
/* ----------------
! * index_getmulti - get multiple tuples from an index scan
*
! * Collects the TIDs of multiple heap tuples satisfying the scan keys.
* Since there's no interlock between the index scan and the eventual heap
* access, this is only safe to use with MVCC-based snapshots: the heap
* item slot could have been replaced by a newer tuple by the time we get
* to it.
*
! * A TRUE result indicates more calls should occur; a FALSE result says the
! * scan is done. *returned_tids could be zero or nonzero in either case.
* ----------------
*/
! bool
! index_getmulti(IndexScanDesc scan,
! ItemPointer tids, int32 max_tids,
! int32 *returned_tids)
{
FmgrInfo *procedure;
! bool found;
SCAN_CHECKS;
! GET_SCAN_PROCEDURE(amgetmulti);
/* just make sure this is false... */
scan->kill_prior_tuple = false;
/*
! * have the am's getmulti proc do all the work.
*/
! found = DatumGetBool(FunctionCall4(procedure,
! PointerGetDatum(scan),
! PointerGetDatum(tids),
! Int32GetDatum(max_tids),
! PointerGetDatum(returned_tids)));
! pgstat_count_index_tuples(&scan->xs_pgstat_info, *returned_tids);
! return found;
}
/* ----------------
--- 511,547 ----
/* ----------------
! * index_getbitmap - get all tuples from an index scan
*
! * Adds the TIDs of all heap tuples satisfying the scan keys to a bitmap.
* Since there's no interlock between the index scan and the eventual heap
* access, this is only safe to use with MVCC-based snapshots: the heap
* item slot could have been replaced by a newer tuple by the time we get
* to it.
*
! * Returns the number of matching tuples found.
* ----------------
*/
! int32
! index_getbitmap(IndexScanDesc scan, TIDBitmap *bitmap)
{
FmgrInfo *procedure;
! int32 ntids;
SCAN_CHECKS;
! GET_SCAN_PROCEDURE(amgetbitmap);
/* just make sure this is false... */
scan->kill_prior_tuple = false;
/*
! * have the am's getbitmap proc do all the work.
*/
! ntids = DatumGetInt32(FunctionCall2(procedure,
! PointerGetDatum(scan),
! PointerGetDatum(bitmap)));
! pgstat_count_index_tuples(&scan->xs_pgstat_info, ntids);
! return ntids;
}
/* ----------------
Index: src/backend/access/nbtree/nbtree.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/nbtree/nbtree.c,v
retrieving revision 1.154
diff -c -r1.154 nbtree.c
*** src/backend/access/nbtree/nbtree.c 5 Jan 2007 22:19:23 -0000 1.154
--- src/backend/access/nbtree/nbtree.c 8 Mar 2007 20:02:28 -0000
***************
*** 278,302 ****
}
/*
! * btgetmulti() -- get multiple tuples at once
! *
! * In the current implementation there seems no strong reason to stop at
! * index page boundaries; we just press on until we fill the caller's buffer
! * or run out of matches.
*/
Datum
! btgetmulti(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! ItemPointer tids = (ItemPointer) PG_GETARG_POINTER(1);
! int32 max_tids = PG_GETARG_INT32(2);
! int32 *returned_tids = (int32 *) PG_GETARG_POINTER(3);
BTScanOpaque so = (BTScanOpaque) scan->opaque;
bool res = true;
int32 ntids = 0;
!
! if (max_tids <= 0) /* behave correctly in boundary case */
! PG_RETURN_BOOL(true);
/* If we haven't started the scan yet, fetch the first page & tuple. */
if (!BTScanPosIsValid(so->currPos))
--- 278,294 ----
}
/*
! * btgetbitmap() -- gets all matching tuples, and adds them to a bitmap
*/
Datum
! btgetbitmap(PG_FUNCTION_ARGS)
{
IndexScanDesc scan = (IndexScanDesc) PG_GETARG_POINTER(0);
! TIDBitmap *tbm = (TIDBitmap *) PG_GETARG_POINTER(1);
BTScanOpaque so = (BTScanOpaque) scan->opaque;
bool res = true;
int32 ntids = 0;
! ItemPointer heapTid;
/* If we haven't started the scan yet, fetch the first page & tuple. */
if (!BTScanPosIsValid(so->currPos))
***************
*** 305,319 ****
if (!res)
{
/* empty scan */
! *returned_tids = ntids;
! PG_RETURN_BOOL(res);
}
/* Save tuple ID, and continue scanning */
! tids[ntids] = scan->xs_ctup.t_self;
ntids++;
}
! while (ntids < max_tids)
{
/*
* Advance to next tuple within page. This is the same as the easy
--- 297,312 ----
if (!res)
{
/* empty scan */
! PG_RETURN_INT32(0);
}
/* Save tuple ID, and continue scanning */
! heapTid = &scan->xs_ctup.t_self;
! tbm_add_tuples(tbm, heapTid, 1, false);
!
ntids++;
}
! while (true)
{
/*
* Advance to next tuple within page. This is the same as the easy
***************
*** 328,339 ****
}
/* Save tuple ID, and continue scanning */
! tids[ntids] = so->currPos.items[so->currPos.itemIndex].heapTid;
ntids++;
}
! *returned_tids = ntids;
! PG_RETURN_BOOL(res);
}
/*
--- 321,333 ----
}
/* Save tuple ID, and continue scanning */
! heapTid = &so->currPos.items[so->currPos.itemIndex].heapTid;
! tbm_add_tuples(tbm, heapTid, 1, false);
!
ntids++;
}
! PG_RETURN_INT32(ntids);
}
/*
Index: src/backend/executor/nodeBitmapHeapscan.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/executor/nodeBitmapHeapscan.c,v
retrieving revision 1.16
diff -c -r1.16 nodeBitmapHeapscan.c
*** src/backend/executor/nodeBitmapHeapscan.c 5 Jan 2007 22:19:28 -0000 1.16
--- src/backend/executor/nodeBitmapHeapscan.c 7 Mar 2007 21:36:20 -0000
***************
*** 204,210 ****
* If we are using lossy info, we have to recheck the qual conditions
* at every tuple.
*/
! if (tbmres->ntuples < 0)
{
econtext->ecxt_scantuple = slot;
ResetExprContext(econtext);
--- 204,210 ----
* If we are using lossy info, we have to recheck the qual conditions
* at every tuple.
*/
! if (tbmres->ntuples < 0 || tbmres->iscandidates)
{
econtext->ecxt_scantuple = slot;
ResetExprContext(econtext);
Index: src/backend/executor/nodeBitmapIndexscan.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/executor/nodeBitmapIndexscan.c,v
retrieving revision 1.22
diff -c -r1.22 nodeBitmapIndexscan.c
*** src/backend/executor/nodeBitmapIndexscan.c 5 Jan 2007 22:19:28 -0000 1.22
--- src/backend/executor/nodeBitmapIndexscan.c 7 Mar 2007 22:25:10 -0000
***************
*** 37,46 ****
Node *
MultiExecBitmapIndexScan(BitmapIndexScanState *node)
{
- #define MAX_TIDS 1024
TIDBitmap *tbm;
IndexScanDesc scandesc;
- ItemPointerData tids[MAX_TIDS];
int32 ntids;
double nTuples = 0;
bool doscan;
--- 37,44 ----
***************
*** 91,113 ****
*/
while (doscan)
{
! bool more = index_getmulti(scandesc, tids, MAX_TIDS, &ntids);
! if (ntids > 0)
! {
! tbm_add_tuples(tbm, tids, ntids);
! nTuples += ntids;
! }
CHECK_FOR_INTERRUPTS();
! if (!more)
! {
! doscan = ExecIndexAdvanceArrayKeys(node->biss_ArrayKeys,
node->biss_NumArrayKeys);
! if (doscan) /* reset index scan */
! index_rescan(node->biss_ScanDesc, node->biss_ScanKeys);
! }
}
/* must provide our own instrumentation support */
--- 89,104 ----
*/
while (doscan)
{
! ntids = index_getbitmap(scandesc, tbm);
! nTuples += ntids;
CHECK_FOR_INTERRUPTS();
! doscan = ExecIndexAdvanceArrayKeys(node->biss_ArrayKeys,
node->biss_NumArrayKeys);
! if (doscan) /* reset index scan */
! index_rescan(node->biss_ScanDesc, node->biss_ScanKeys);
}
/* must provide our own instrumentation support */
Index: src/backend/nodes/tidbitmap.c
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/nodes/tidbitmap.c,v
retrieving revision 1.11
diff -c -r1.11 tidbitmap.c
*** src/backend/nodes/tidbitmap.c 5 Jan 2007 22:19:30 -0000 1.11
--- src/backend/nodes/tidbitmap.c 12 Mar 2007 12:04:25 -0000
***************
*** 10,16 ****
* Also, since we wish to be able to store very large tuple sets in
* memory with this data structure, we support "lossy" storage, in which
* we no longer remember individual tuple offsets on a page but only the
! * fact that a particular page needs to be visited.
*
* The "lossy" storage uses one bit per disk page, so at the standard 8K
* BLCKSZ, we can represent all pages in 64Gb of disk space in about 1Mb
--- 10,21 ----
* Also, since we wish to be able to store very large tuple sets in
* memory with this data structure, we support "lossy" storage, in which
* we no longer remember individual tuple offsets on a page but only the
! * fact that a particular page needs to be visited. We also support the
! * notion of candidate matches, which are like non-lossy matches in that
! * the individual tuple offsets are remembered, but the offsets remembered
! * are a superset of the actual matches. Candidate matches need to be
! * rechecked in the executor to see which ones really match. They are
! * used when a lossy page is intersected with a non-lossy page.
*
* The "lossy" storage uses one bit per disk page, so at the standard 8K
* BLCKSZ, we can represent all pages in 64Gb of disk space in about 1Mb
***************
*** 87,92 ****
--- 92,98 ----
{
BlockNumber blockno; /* page number (hashtable key) */
bool ischunk; /* T = lossy storage, F = exact */
+ bool iscandidate; /* should the results be rechecked? */
bitmapword words[Max(WORDS_PER_PAGE, WORDS_PER_CHUNK)];
} PagetableEntry;
***************
*** 145,150 ****
--- 151,160 ----
static void tbm_lossify(TIDBitmap *tbm);
static int tbm_comparator(const void *left, const void *right);
+ #ifdef TIDBITMAP_DEBUG
+ static void dump_pte(const PagetableEntry *e);
+ static void tbm_dump(TIDBitmap *tbm);
+ #endif
/*
* tbm_create - create an initially-empty bitmap
***************
*** 247,253 ****
* tbm_add_tuples - add some tuple IDs to a TIDBitmap
*/
void
! tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids)
{
int i;
--- 257,263 ----
* tbm_add_tuples - add some tuple IDs to a TIDBitmap
*/
void
! tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids, bool candidates)
{
int i;
***************
*** 281,286 ****
--- 291,297 ----
bitnum = BITNUM(off - 1);
}
page->words[wordnum] |= ((bitmapword) 1 << bitnum);
+ page->iscandidate = page->iscandidate || candidates;
if (tbm->nentries > tbm->maxentries)
tbm_lossify(tbm);
***************
*** 361,366 ****
--- 372,378 ----
/* Both pages are exact, merge at the bit level */
for (wordnum = 0; wordnum < WORDS_PER_PAGE; wordnum++)
apage->words[wordnum] |= bpage->words[wordnum];
+ apage->iscandidate = apage->iscandidate || bpage->iscandidate;
}
}
***************
*** 472,493 ****
else if (tbm_page_is_lossy(b, apage->blockno))
{
/*
! * When the page is lossy in b, we have to mark it lossy in a too. We
! * know that no bits need be set in bitmap a, but we do not know which
! * ones should be cleared, and we have no API for "at most these
! * tuples need be checked". (Perhaps it's worth adding that?)
*/
! tbm_mark_page_lossy(a, apage->blockno);
- /*
- * Note: tbm_mark_page_lossy will have removed apage from a, and may
- * have inserted a new lossy chunk instead. We can continue the same
- * seq_search scan at the caller level, because it does not matter
- * whether we visit such a new chunk or not: it will have only the bit
- * for apage->blockno set, which is correct.
- *
- * We must return false here since apage was already deleted.
- */
return false;
}
else
--- 484,496 ----
else if (tbm_page_is_lossy(b, apage->blockno))
{
/*
! * Some of the tuples in 'a' might not satisfy the quals for 'b',
! * but because page 'b' is lossy, we don't know which ones. Therefore
! * we mark the page in 'a' as a candidate, to indicate that at most
! * the tuples set in 'a' are matches.
*/
! apage->iscandidate = true;
return false;
}
else
***************
*** 505,511 ****
--- 508,516 ----
if (apage->words[wordnum] != 0)
candelete = false;
}
+ apage->iscandidate = apage->iscandidate || bpage->iscandidate;
}
+
return candelete;
}
}
***************
*** 677,682 ****
--- 682,688 ----
}
output->blockno = page->blockno;
output->ntuples = ntuples;
+ output->iscandidates = page->iscandidate;
tbm->spageptr++;
return output;
}
***************
*** 932,934 ****
--- 938,990 ----
return 1;
return 0;
}
+
+
+ #ifdef TIDBITMAP_DEBUG
+ static void
+ dump_pte(const PagetableEntry *e)
+ {
+ int i;
+ int max;
+ char str[Max(WORDS_PER_PAGE, WORDS_PER_CHUNK) * BITS_PER_BITMAPWORD + 1];
+
+ if (e->ischunk)
+ max = WORDS_PER_CHUNK * BITS_PER_BITMAPWORD;
+ else
+ max = WORDS_PER_PAGE * BITS_PER_BITMAPWORD;
+
+ for (i = 0; i < max; i++)
+ {
+ if (e->words[WORDNUM(i)] & ((bitmapword) 1 << BITNUM(i)))
+ str[i] = '1';
+ else
+ str[i] = '0';
+ }
+ str[max] = '\0';
+
+
+ elog(LOG, "blockno %u%s%s: %s", e->blockno,
+ e->ischunk ? " (lossy)" : "",
+ e->iscandidate ? " (candidates)" : "",
+ str);
+ }
+
+
+ static void
+ tbm_dump(TIDBitmap *tbm)
+ {
+ int i;
+
+ elog(LOG, "Bitmap, %d lossy and %d non-lossy pages", tbm->nchunks, tbm->npages);
+
+ if (tbm->status == TBM_ONE_PAGE)
+ dump_pte(&tbm->entry1);
+ else
+ {
+ for (i = 0; i < tbm->nchunks; i++)
+ dump_pte(tbm->schunks[i]);
+ for (i = 0; i < tbm->npages; i++)
+ dump_pte(tbm->spages[i]);
+ }
+ }
+ #endif
Index: src/include/access/genam.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/access/genam.h,v
retrieving revision 1.66
diff -c -r1.66 genam.h
*** src/include/access/genam.h 5 Jan 2007 22:19:50 -0000 1.66
--- src/include/access/genam.h 7 Mar 2007 22:06:31 -0000
***************
*** 17,22 ****
--- 17,23 ----
#include "access/relscan.h"
#include "access/sdir.h"
#include "nodes/primnodes.h"
+ #include "nodes/tidbitmap.h"
#include "storage/lock.h"
/*
***************
*** 108,116 ****
extern HeapTuple index_getnext(IndexScanDesc scan, ScanDirection direction);
extern bool index_getnext_indexitem(IndexScanDesc scan,
ScanDirection direction);
! extern bool index_getmulti(IndexScanDesc scan,
! ItemPointer tids, int32 max_tids,
! int32 *returned_tids);
extern IndexBulkDeleteResult *index_bulk_delete(IndexVacuumInfo *info,
IndexBulkDeleteResult *stats,
--- 109,115 ----
extern HeapTuple index_getnext(IndexScanDesc scan, ScanDirection direction);
extern bool index_getnext_indexitem(IndexScanDesc scan,
ScanDirection direction);
! extern int32 index_getbitmap(IndexScanDesc scan, TIDBitmap *bitmap);
extern IndexBulkDeleteResult *index_bulk_delete(IndexVacuumInfo *info,
IndexBulkDeleteResult *stats,
Index: src/include/access/gin.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/access/gin.h,v
retrieving revision 1.10
diff -c -r1.10 gin.h
*** src/include/access/gin.h 31 Jan 2007 15:09:45 -0000 1.10
--- src/include/access/gin.h 8 Mar 2007 19:54:45 -0000
***************
*** 422,428 ****
#define ItemPointerSetMin(p) ItemPointerSet( (p), (BlockNumber)0, (OffsetNumber)0)
#define ItemPointerIsMin(p) ( ItemPointerGetBlockNumber(p) == (BlockNumber)0 && ItemPointerGetOffsetNumber(p) == (OffsetNumber)0 )
! extern Datum gingetmulti(PG_FUNCTION_ARGS);
extern Datum gingettuple(PG_FUNCTION_ARGS);
/* ginvacuum.c */
--- 422,428 ----
#define ItemPointerSetMin(p) ItemPointerSet( (p), (BlockNumber)0, (OffsetNumber)0)
#define ItemPointerIsMin(p) ( ItemPointerGetBlockNumber(p) == (BlockNumber)0 && ItemPointerGetOffsetNumber(p) == (OffsetNumber)0 )
! extern Datum gingetbitmap(PG_FUNCTION_ARGS);
extern Datum gingettuple(PG_FUNCTION_ARGS);
/* ginvacuum.c */
Index: src/include/access/gist_private.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/access/gist_private.h,v
retrieving revision 1.26
diff -c -r1.26 gist_private.h
*** src/include/access/gist_private.h 20 Jan 2007 18:43:35 -0000 1.26
--- src/include/access/gist_private.h 8 Mar 2007 19:50:23 -0000
***************
*** 271,277 ****
/* gistget.c */
extern Datum gistgettuple(PG_FUNCTION_ARGS);
! extern Datum gistgetmulti(PG_FUNCTION_ARGS);
/* gistutil.c */
--- 271,277 ----
/* gistget.c */
extern Datum gistgettuple(PG_FUNCTION_ARGS);
! extern Datum gistgetbitmap(PG_FUNCTION_ARGS);
/* gistutil.c */
Index: src/include/access/hash.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/access/hash.h,v
retrieving revision 1.76
diff -c -r1.76 hash.h
*** src/include/access/hash.h 30 Jan 2007 01:33:36 -0000 1.76
--- src/include/access/hash.h 8 Mar 2007 19:50:47 -0000
***************
*** 235,241 ****
extern Datum hashinsert(PG_FUNCTION_ARGS);
extern Datum hashbeginscan(PG_FUNCTION_ARGS);
extern Datum hashgettuple(PG_FUNCTION_ARGS);
! extern Datum hashgetmulti(PG_FUNCTION_ARGS);
extern Datum hashrescan(PG_FUNCTION_ARGS);
extern Datum hashendscan(PG_FUNCTION_ARGS);
extern Datum hashmarkpos(PG_FUNCTION_ARGS);
--- 235,241 ----
extern Datum hashinsert(PG_FUNCTION_ARGS);
extern Datum hashbeginscan(PG_FUNCTION_ARGS);
extern Datum hashgettuple(PG_FUNCTION_ARGS);
! extern Datum hashgetbitmap(PG_FUNCTION_ARGS);
extern Datum hashrescan(PG_FUNCTION_ARGS);
extern Datum hashendscan(PG_FUNCTION_ARGS);
extern Datum hashmarkpos(PG_FUNCTION_ARGS);
Index: src/include/access/nbtree.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/access/nbtree.h,v
retrieving revision 1.111
diff -c -r1.111 nbtree.h
*** src/include/access/nbtree.h 8 Feb 2007 05:05:53 -0000 1.111
--- src/include/access/nbtree.h 7 Mar 2007 17:28:19 -0000
***************
*** 486,492 ****
extern Datum btinsert(PG_FUNCTION_ARGS);
extern Datum btbeginscan(PG_FUNCTION_ARGS);
extern Datum btgettuple(PG_FUNCTION_ARGS);
! extern Datum btgetmulti(PG_FUNCTION_ARGS);
extern Datum btrescan(PG_FUNCTION_ARGS);
extern Datum btendscan(PG_FUNCTION_ARGS);
extern Datum btmarkpos(PG_FUNCTION_ARGS);
--- 486,492 ----
extern Datum btinsert(PG_FUNCTION_ARGS);
extern Datum btbeginscan(PG_FUNCTION_ARGS);
extern Datum btgettuple(PG_FUNCTION_ARGS);
! extern Datum btgetbitmap(PG_FUNCTION_ARGS);
extern Datum btrescan(PG_FUNCTION_ARGS);
extern Datum btendscan(PG_FUNCTION_ARGS);
extern Datum btmarkpos(PG_FUNCTION_ARGS);
Index: src/include/catalog/pg_am.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/catalog/pg_am.h,v
retrieving revision 1.50
diff -c -r1.50 pg_am.h
*** src/include/catalog/pg_am.h 20 Jan 2007 23:13:01 -0000 1.50
--- src/include/catalog/pg_am.h 8 Mar 2007 20:04:27 -0000
***************
*** 55,61 ****
regproc aminsert; /* "insert this tuple" function */
regproc ambeginscan; /* "start new scan" function */
regproc amgettuple; /* "next valid tuple" function */
! regproc amgetmulti; /* "fetch multiple tuples" function */
regproc amrescan; /* "restart this scan" function */
regproc amendscan; /* "end this scan" function */
regproc ammarkpos; /* "mark current scan position" function */
--- 55,61 ----
regproc aminsert; /* "insert this tuple" function */
regproc ambeginscan; /* "start new scan" function */
regproc amgettuple; /* "next valid tuple" function */
! regproc amgetbitmap; /* "fetch all valid tuples" function */
regproc amrescan; /* "restart this scan" function */
regproc amendscan; /* "end this scan" function */
regproc ammarkpos; /* "mark current scan position" function */
***************
*** 92,98 ****
#define Anum_pg_am_aminsert 11
#define Anum_pg_am_ambeginscan 12
#define Anum_pg_am_amgettuple 13
! #define Anum_pg_am_amgetmulti 14
#define Anum_pg_am_amrescan 15
#define Anum_pg_am_amendscan 16
#define Anum_pg_am_ammarkpos 17
--- 92,98 ----
#define Anum_pg_am_aminsert 11
#define Anum_pg_am_ambeginscan 12
#define Anum_pg_am_amgettuple 13
! #define Anum_pg_am_amgetbitmap 14
#define Anum_pg_am_amrescan 15
#define Anum_pg_am_amendscan 16
#define Anum_pg_am_ammarkpos 17
***************
*** 108,123 ****
* ----------------
*/
! DATA(insert OID = 403 ( btree 5 1 t t t t t f t btinsert btbeginscan btgettuple btgetmulti btrescan btendscan btmarkpos btrestrpos btbuild btbulkdelete btvacuumcleanup btcostestimate btoptions ));
DESCR("b-tree index access method");
#define BTREE_AM_OID 403
! DATA(insert OID = 405 ( hash 1 1 f f f f f f f hashinsert hashbeginscan hashgettuple hashgetmulti hashrescan hashendscan hashmarkpos hashrestrpos hashbuild hashbulkdelete hashvacuumcleanup hashcostestimate hashoptions ));
DESCR("hash index access method");
#define HASH_AM_OID 405
! DATA(insert OID = 783 ( gist 0 7 f f t t t t t gistinsert gistbeginscan gistgettuple gistgetmulti gistrescan gistendscan gistmarkpos gistrestrpos gistbuild gistbulkdelete gistvacuumcleanup gistcostestimate gistoptions ));
DESCR("GiST index access method");
#define GIST_AM_OID 783
! DATA(insert OID = 2742 ( gin 0 4 f f f f f t f gininsert ginbeginscan gingettuple gingetmulti ginrescan ginendscan ginmarkpos ginrestrpos ginbuild ginbulkdelete ginvacuumcleanup gincostestimate ginoptions ));
DESCR("GIN index access method");
#define GIN_AM_OID 2742
--- 108,123 ----
* ----------------
*/
! DATA(insert OID = 403 ( btree 5 1 t t t t t f t btinsert btbeginscan btgettuple btgetbitmap btrescan btendscan btmarkpos btrestrpos btbuild btbulkdelete btvacuumcleanup btcostestimate btoptions ));
DESCR("b-tree index access method");
#define BTREE_AM_OID 403
! DATA(insert OID = 405 ( hash 1 1 f f f f f f f hashinsert hashbeginscan hashgettuple hashgetbitmap hashrescan hashendscan hashmarkpos hashrestrpos hashbuild hashbulkdelete hashvacuumcleanup hashcostestimate hashoptions ));
DESCR("hash index access method");
#define HASH_AM_OID 405
! DATA(insert OID = 783 ( gist 0 7 f f t t t t t gistinsert gistbeginscan gistgettuple gistgetbitmap gistrescan gistendscan gistmarkpos gistrestrpos gistbuild gistbulkdelete gistvacuumcleanup gistcostestimate gistoptions ));
DESCR("GiST index access method");
#define GIST_AM_OID 783
! DATA(insert OID = 2742 ( gin 0 4 f f f f f t f gininsert ginbeginscan gingettuple gingetbitmap ginrescan ginendscan ginmarkpos ginrestrpos ginbuild ginbulkdelete ginvacuumcleanup gincostestimate ginoptions ));
DESCR("GIN index access method");
#define GIN_AM_OID 2742
Index: src/include/catalog/pg_proc.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/catalog/pg_proc.h,v
retrieving revision 1.447
diff -c -r1.447 pg_proc.h
*** src/include/catalog/pg_proc.h 3 Mar 2007 19:52:46 -0000 1.447
--- src/include/catalog/pg_proc.h 8 Mar 2007 20:04:58 -0000
***************
*** 664,670 ****
DATA(insert OID = 330 ( btgettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ btgettuple - _null_ ));
DESCR("btree(internal)");
! DATA(insert OID = 636 ( btgetmulti PGNSP PGUID 12 1 0 f f t f v 4 16 "2281 2281 2281 2281" _null_ _null_ _null_ btgetmulti - _null_ ));
DESCR("btree(internal)");
DATA(insert OID = 331 ( btinsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ btinsert - _null_ ));
DESCR("btree(internal)");
--- 664,670 ----
DATA(insert OID = 330 ( btgettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ btgettuple - _null_ ));
DESCR("btree(internal)");
! DATA(insert OID = 636 ( btgetbitmap PGNSP PGUID 12 1 0 f f t f v 2 23 "2281 2281" _null_ _null_ _null_ btgetbitmap - _null_ ));
DESCR("btree(internal)");
DATA(insert OID = 331 ( btinsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ btinsert - _null_ ));
DESCR("btree(internal)");
***************
*** 783,789 ****
DATA(insert OID = 440 ( hashgettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ hashgettuple - _null_ ));
DESCR("hash(internal)");
! DATA(insert OID = 637 ( hashgetmulti PGNSP PGUID 12 1 0 f f t f v 4 16 "2281 2281 2281 2281" _null_ _null_ _null_ hashgetmulti - _null_ ));
DESCR("hash(internal)");
DATA(insert OID = 441 ( hashinsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ hashinsert - _null_ ));
DESCR("hash(internal)");
--- 783,789 ----
DATA(insert OID = 440 ( hashgettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ hashgettuple - _null_ ));
DESCR("hash(internal)");
! DATA(insert OID = 637 ( hashgetbitmap PGNSP PGUID 12 1 0 f f t f v 2 23 "2281 2281" _null_ _null_ _null_ hashgetbitmap - _null_ ));
DESCR("hash(internal)");
DATA(insert OID = 441 ( hashinsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ hashinsert - _null_ ));
DESCR("hash(internal)");
***************
*** 1051,1057 ****
DATA(insert OID = 774 ( gistgettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ gistgettuple - _null_ ));
DESCR("gist(internal)");
! DATA(insert OID = 638 ( gistgetmulti PGNSP PGUID 12 1 0 f f t f v 4 16 "2281 2281 2281 2281" _null_ _null_ _null_ gistgetmulti - _null_ ));
DESCR("gist(internal)");
DATA(insert OID = 775 ( gistinsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ gistinsert - _null_ ));
DESCR("gist(internal)");
--- 1051,1057 ----
DATA(insert OID = 774 ( gistgettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ gistgettuple - _null_ ));
DESCR("gist(internal)");
! DATA(insert OID = 638 ( gistgetbitmap PGNSP PGUID 12 1 0 f f t f v 2 23 "2281 2281" _null_ _null_ _null_ gistgetbitmap - _null_ ));
DESCR("gist(internal)");
DATA(insert OID = 775 ( gistinsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ gistinsert - _null_ ));
DESCR("gist(internal)");
***************
*** 3967,3973 ****
/* GIN */
DATA(insert OID = 2730 ( gingettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ gingettuple - _null_ ));
DESCR("gin(internal)");
! DATA(insert OID = 2731 ( gingetmulti PGNSP PGUID 12 1 0 f f t f v 4 16 "2281 2281 2281 2281" _null_ _null_ _null_ gingetmulti - _null_ ));
DESCR("gin(internal)");
DATA(insert OID = 2732 ( gininsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ gininsert - _null_ ));
DESCR("gin(internal)");
--- 3967,3973 ----
/* GIN */
DATA(insert OID = 2730 ( gingettuple PGNSP PGUID 12 1 0 f f t f v 2 16 "2281 2281" _null_ _null_ _null_ gingettuple - _null_ ));
DESCR("gin(internal)");
! DATA(insert OID = 2731 ( gingetbitmap PGNSP PGUID 12 1 0 f f t f v 2 23 "2281 2281" _null_ _null_ _null_ gingetbitmap - _null_ ));
DESCR("gin(internal)");
DATA(insert OID = 2732 ( gininsert PGNSP PGUID 12 1 0 f f t f v 6 16 "2281 2281 2281 2281 2281 2281" _null_ _null_ _null_ gininsert - _null_ ));
DESCR("gin(internal)");
Index: src/include/nodes/tidbitmap.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/nodes/tidbitmap.h,v
retrieving revision 1.5
diff -c -r1.5 tidbitmap.h
*** src/include/nodes/tidbitmap.h 5 Jan 2007 22:19:56 -0000 1.5
--- src/include/nodes/tidbitmap.h 12 Mar 2007 12:07:03 -0000
***************
*** 36,41 ****
--- 36,42 ----
{
BlockNumber blockno; /* page number containing tuples */
int ntuples; /* -1 indicates lossy result */
+ bool iscandidates; /* do the results need to be rechecked? */
OffsetNumber offsets[1]; /* VARIABLE LENGTH ARRAY */
} TBMIterateResult; /* VARIABLE LENGTH STRUCT */
***************
*** 44,50 ****
extern TIDBitmap *tbm_create(long maxbytes);
extern void tbm_free(TIDBitmap *tbm);
! extern void tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids);
extern void tbm_union(TIDBitmap *a, const TIDBitmap *b);
extern void tbm_intersect(TIDBitmap *a, const TIDBitmap *b);
--- 45,51 ----
extern TIDBitmap *tbm_create(long maxbytes);
extern void tbm_free(TIDBitmap *tbm);
! extern void tbm_add_tuples(TIDBitmap *tbm, const ItemPointer tids, int ntids, bool candidates);
extern void tbm_union(TIDBitmap *a, const TIDBitmap *b);
extern void tbm_intersect(TIDBitmap *a, const TIDBitmap *b);
Index: src/include/utils/rel.h
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/include/utils/rel.h,v
retrieving revision 1.98
diff -c -r1.98 rel.h
*** src/include/utils/rel.h 27 Feb 2007 23:48:10 -0000 1.98
--- src/include/utils/rel.h 7 Mar 2007 22:07:49 -0000
***************
*** 107,113 ****
FmgrInfo aminsert;
FmgrInfo ambeginscan;
FmgrInfo amgettuple;
! FmgrInfo amgetmulti;
FmgrInfo amrescan;
FmgrInfo amendscan;
FmgrInfo ammarkpos;
--- 107,113 ----
FmgrInfo aminsert;
FmgrInfo ambeginscan;
FmgrInfo amgettuple;
! FmgrInfo amgetbitmap;
FmgrInfo amrescan;
FmgrInfo amendscan;
FmgrInfo ammarkpos;
Index: src/test/regress/parallel_schedule
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/test/regress/parallel_schedule,v
retrieving revision 1.39
diff -c -r1.39 parallel_schedule
*** src/test/regress/parallel_schedule 9 Feb 2007 03:35:35 -0000 1.39
--- src/test/regress/parallel_schedule 12 Mar 2007 11:04:17 -0000
***************
*** 61,67 ****
# ----------
# The fourth group of parallel test
# ----------
! test: select_into select_distinct select_distinct_on select_implicit select_having subselect union case join aggregates transactions random portals arrays btree_index hash_index update namespace prepared_xacts delete
test: privileges
test: misc
--- 61,67 ----
# ----------
# The fourth group of parallel test
# ----------
! test: select_into select_distinct select_distinct_on select_implicit select_having subselect union case join aggregates bitmapops transactions random portals arrays btree_index hash_index update namespace prepared_xacts delete
test: privileges
test: misc
Index: src/test/regress/serial_schedule
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/test/regress/serial_schedule,v
retrieving revision 1.37
diff -c -r1.37 serial_schedule
*** src/test/regress/serial_schedule 9 Feb 2007 03:35:35 -0000 1.37
--- src/test/regress/serial_schedule 12 Mar 2007 11:04:22 -0000
***************
*** 68,73 ****
--- 68,74 ----
test: case
test: join
test: aggregates
+ test: bitmapops
test: transactions
ignore: random
test: random
Index: src/test/regress/expected/create_index.out
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/test/regress/expected/create_index.out,v
retrieving revision 1.23
diff -c -r1.23 create_index.out
*** src/test/regress/expected/create_index.out 9 Jan 2007 02:14:16 -0000 1.23
--- src/test/regress/expected/create_index.out 12 Mar 2007 11:11:08 -0000
***************
*** 315,320 ****
--- 315,408 ----
96 | {23,97,43} | {AAAAAAAAAA646,A87088}
(1 row)
+ -- Repeat some of the above tests but make sure we exercise bitmapscans
+ SET enable_indexscan = OFF;
+ SELECT * FROM array_index_op_test WHERE i @> '{32}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------
+ 6 | {39,35,5,94,17,92,60,32} | {AAAAAAAAAAAAAAA35875,AAAAAAAAAAAAAAAA23657}
+ 74 | {32} | {AAAAAAAAAAAAAAAA1729,AAAAAAAAAAAAA22860,AAAAAA99807,AAAAA17383,AAAAAAAAAAAAAAA67062,AAAAAAAAAAA15165,AAAAAAAAAAA50956}
+ 77 | {97,15,32,17,55,59,18,37,50,39} | {AAAAAAAAAAAA67946,AAAAAA54032,AAAAAAAA81587,55847,AAAAAAAAAAAAAA28620,AAAAAAAAAAAAAAAAA43052,AAAAAA75463,AAAA49534,AAAAAAAA44066}
+ 89 | {40,32,17,6,30,88} | {AA44673,AAAAAAAAAAA6119,AAAAAAAAAAAAAAAA23657,AAAAAAAAAAAAAAAAAA47955,AAAAAAAAAAAAAAAA33598,AAAAAAAAAAA33576,AA44673}
+ 98 | {38,34,32,89} | {AAAAAAAAAAAAAAAAAA71621,AAAA8857,AAAAAAAAAAAAAAAAAAA65037,AAAAAAAAAAAAAAAA31334,AAAAAAAAAA48845}
+ 100 | {85,32,57,39,49,84,32,3,30} | {AAAAAAA80240,AAAAAAAAAAAAAAAA1729,AAAAA60038,AAAAAAAAAAA92631,AAAAAAAA9523}
+ (6 rows)
+
+ SELECT * FROM array_index_op_test WHERE i && '{32}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------
+ 6 | {39,35,5,94,17,92,60,32} | {AAAAAAAAAAAAAAA35875,AAAAAAAAAAAAAAAA23657}
+ 74 | {32} | {AAAAAAAAAAAAAAAA1729,AAAAAAAAAAAAA22860,AAAAAA99807,AAAAA17383,AAAAAAAAAAAAAAA67062,AAAAAAAAAAA15165,AAAAAAAAAAA50956}
+ 77 | {97,15,32,17,55,59,18,37,50,39} | {AAAAAAAAAAAA67946,AAAAAA54032,AAAAAAAA81587,55847,AAAAAAAAAAAAAA28620,AAAAAAAAAAAAAAAAA43052,AAAAAA75463,AAAA49534,AAAAAAAA44066}
+ 89 | {40,32,17,6,30,88} | {AA44673,AAAAAAAAAAA6119,AAAAAAAAAAAAAAAA23657,AAAAAAAAAAAAAAAAAA47955,AAAAAAAAAAAAAAAA33598,AAAAAAAAAAA33576,AA44673}
+ 98 | {38,34,32,89} | {AAAAAAAAAAAAAAAAAA71621,AAAA8857,AAAAAAAAAAAAAAAAAAA65037,AAAAAAAAAAAAAAAA31334,AAAAAAAAAA48845}
+ 100 | {85,32,57,39,49,84,32,3,30} | {AAAAAAA80240,AAAAAAAAAAAAAAAA1729,AAAAA60038,AAAAAAAAAAA92631,AAAAAAAA9523}
+ (6 rows)
+
+ SELECT * FROM array_index_op_test WHERE i @> '{17}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------
+ 6 | {39,35,5,94,17,92,60,32} | {AAAAAAAAAAAAAAA35875,AAAAAAAAAAAAAAAA23657}
+ 12 | {17,99,18,52,91,72,0,43,96,23} | {AAAAA33250,AAAAAAAAAAAAAAAAAAA85420,AAAAAAAAAAA33576}
+ 15 | {17,14,16,63,67} | {AA6416,AAAAAAAAAA646,AAAAA95309}
+ 19 | {52,82,17,74,23,46,69,51,75} | {AAAAAAAAAAAAA73084,AAAAA75968,AAAAAAAAAAAAAAAA14047,AAAAAAA80240,AAAAAAAAAAAAAAAAAAA1205,A68938}
+ 53 | {38,17} | {AAAAAAAAAAA21658}
+ 65 | {61,5,76,59,17} | {AAAAAA99807,AAAAA64741,AAAAAAAAAAA53908,AA21643,AAAAAAAAA10012}
+ 77 | {97,15,32,17,55,59,18,37,50,39} | {AAAAAAAAAAAA67946,AAAAAA54032,AAAAAAAA81587,55847,AAAAAAAAAAAAAA28620,AAAAAAAAAAAAAAAAA43052,AAAAAA75463,AAAA49534,AAAAAAAA44066}
+ 89 | {40,32,17,6,30,88} | {AA44673,AAAAAAAAAAA6119,AAAAAAAAAAAAAAAA23657,AAAAAAAAAAAAAAAAAA47955,AAAAAAAAAAAAAAAA33598,AAAAAAAAAAA33576,AA44673}
+ (8 rows)
+
+ SELECT * FROM array_index_op_test WHERE i && '{17}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------
+ 6 | {39,35,5,94,17,92,60,32} | {AAAAAAAAAAAAAAA35875,AAAAAAAAAAAAAAAA23657}
+ 12 | {17,99,18,52,91,72,0,43,96,23} | {AAAAA33250,AAAAAAAAAAAAAAAAAAA85420,AAAAAAAAAAA33576}
+ 15 | {17,14,16,63,67} | {AA6416,AAAAAAAAAA646,AAAAA95309}
+ 19 | {52,82,17,74,23,46,69,51,75} | {AAAAAAAAAAAAA73084,AAAAA75968,AAAAAAAAAAAAAAAA14047,AAAAAAA80240,AAAAAAAAAAAAAAAAAAA1205,A68938}
+ 53 | {38,17} | {AAAAAAAAAAA21658}
+ 65 | {61,5,76,59,17} | {AAAAAA99807,AAAAA64741,AAAAAAAAAAA53908,AA21643,AAAAAAAAA10012}
+ 77 | {97,15,32,17,55,59,18,37,50,39} | {AAAAAAAAAAAA67946,AAAAAA54032,AAAAAAAA81587,55847,AAAAAAAAAAAAAA28620,AAAAAAAAAAAAAAAAA43052,AAAAAA75463,AAAA49534,AAAAAAAA44066}
+ 89 | {40,32,17,6,30,88} | {AA44673,AAAAAAAAAAA6119,AAAAAAAAAAAAAAAA23657,AAAAAAAAAAAAAAAAAA47955,AAAAAAAAAAAAAAAA33598,AAAAAAAAAAA33576,AA44673}
+ (8 rows)
+
+ SELECT * FROM array_index_op_test WHERE i @> '{32,17}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------
+ 6 | {39,35,5,94,17,92,60,32} | {AAAAAAAAAAAAAAA35875,AAAAAAAAAAAAAAAA23657}
+ 77 | {97,15,32,17,55,59,18,37,50,39} | {AAAAAAAAAAAA67946,AAAAAA54032,AAAAAAAA81587,55847,AAAAAAAAAAAAAA28620,AAAAAAAAAAAAAAAAA43052,AAAAAA75463,AAAA49534,AAAAAAAA44066}
+ 89 | {40,32,17,6,30,88} | {AA44673,AAAAAAAAAAA6119,AAAAAAAAAAAAAAAA23657,AAAAAAAAAAAAAAAAAA47955,AAAAAAAAAAAAAAAA33598,AAAAAAAAAAA33576,AA44673}
+ (3 rows)
+
+ SELECT * FROM array_index_op_test WHERE i && '{32,17}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------------------------------+------------------------------------------------------------------------------------------------------------------------------------
+ 6 | {39,35,5,94,17,92,60,32} | {AAAAAAAAAAAAAAA35875,AAAAAAAAAAAAAAAA23657}
+ 12 | {17,99,18,52,91,72,0,43,96,23} | {AAAAA33250,AAAAAAAAAAAAAAAAAAA85420,AAAAAAAAAAA33576}
+ 15 | {17,14,16,63,67} | {AA6416,AAAAAAAAAA646,AAAAA95309}
+ 19 | {52,82,17,74,23,46,69,51,75} | {AAAAAAAAAAAAA73084,AAAAA75968,AAAAAAAAAAAAAAAA14047,AAAAAAA80240,AAAAAAAAAAAAAAAAAAA1205,A68938}
+ 53 | {38,17} | {AAAAAAAAAAA21658}
+ 65 | {61,5,76,59,17} | {AAAAAA99807,AAAAA64741,AAAAAAAAAAA53908,AA21643,AAAAAAAAA10012}
+ 74 | {32} | {AAAAAAAAAAAAAAAA1729,AAAAAAAAAAAAA22860,AAAAAA99807,AAAAA17383,AAAAAAAAAAAAAAA67062,AAAAAAAAAAA15165,AAAAAAAAAAA50956}
+ 77 | {97,15,32,17,55,59,18,37,50,39} | {AAAAAAAAAAAA67946,AAAAAA54032,AAAAAAAA81587,55847,AAAAAAAAAAAAAA28620,AAAAAAAAAAAAAAAAA43052,AAAAAA75463,AAAA49534,AAAAAAAA44066}
+ 89 | {40,32,17,6,30,88} | {AA44673,AAAAAAAAAAA6119,AAAAAAAAAAAAAAAA23657,AAAAAAAAAAAAAAAAAA47955,AAAAAAAAAAAAAAAA33598,AAAAAAAAAAA33576,AA44673}
+ 98 | {38,34,32,89} | {AAAAAAAAAAAAAAAAAA71621,AAAA8857,AAAAAAAAAAAAAAAAAAA65037,AAAAAAAAAAAAAAAA31334,AAAAAAAAAA48845}
+ 100 | {85,32,57,39,49,84,32,3,30} | {AAAAAAA80240,AAAAAAAAAAAAAAAA1729,AAAAA60038,AAAAAAAAAAA92631,AAAAAAAA9523}
+ (11 rows)
+
+ SELECT * FROM array_index_op_test WHERE i <@ '{38,34,32,89}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------------+----------------------------------------------------------------------------------------------------------------------------
+ 40 | {34} | {AAAAAAAAAAAAAA10611,AAAAAAAAAAAAAAAAAAA1205,AAAAAAAAAAA50956,AAAAAAAAAAAAAAAA31334,AAAAA70466,AAAAAAAA81587,AAAAAAA74623}
+ 74 | {32} | {AAAAAAAAAAAAAAAA1729,AAAAAAAAAAAAA22860,AAAAAA99807,AAAAA17383,AAAAAAAAAAAAAAA67062,AAAAAAAAAAA15165,AAAAAAAAAAA50956}
+ 98 | {38,34,32,89} | {AAAAAAAAAAAAAAAAAA71621,AAAA8857,AAAAAAAAAAAAAAAAAAA65037,AAAAAAAAAAAAAAAA31334,AAAAAAAAAA48845}
+ (3 rows)
+
+ SELECT * FROM array_index_op_test WHERE i = '{47,77}' ORDER BY seqno;
+ seqno | i | t
+ -------+---------+-----------------------------------------------------------------------------------------------------------------
+ 95 | {47,77} | {AAAAAAAAAAAAAAAAA764,AAAAAAAAAAA74076,AAAAAAAAAA18107,AAAAA40681,AAAAAAAAAAAAAAA35875,AAAAA60038,AAAAAAA56483}
+ (1 row)
+
RESET enable_seqscan;
RESET enable_indexscan;
RESET enable_bitmapscan;
Index: src/test/regress/expected/oidjoins.out
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/test/regress/expected/oidjoins.out,v
retrieving revision 1.19
diff -c -r1.19 oidjoins.out
*** src/test/regress/expected/oidjoins.out 30 Dec 2006 21:21:56 -0000 1.19
--- src/test/regress/expected/oidjoins.out 8 Mar 2007 20:13:31 -0000
***************
*** 65,76 ****
------+------------
(0 rows)
! SELECT ctid, amgetmulti
FROM pg_catalog.pg_am fk
! WHERE amgetmulti != 0 AND
! NOT EXISTS(SELECT 1 FROM pg_catalog.pg_proc pk WHERE pk.oid = fk.amgetmulti);
! ctid | amgetmulti
! ------+------------
(0 rows)
SELECT ctid, amrescan
--- 65,76 ----
------+------------
(0 rows)
! SELECT ctid, amgetbitmap
FROM pg_catalog.pg_am fk
! WHERE amgetbitmap != 0 AND
! NOT EXISTS(SELECT 1 FROM pg_catalog.pg_proc pk WHERE pk.oid = fk.amgetbitmap);
! ctid | amgetbitmap
! ------+-------------
(0 rows)
SELECT ctid, amrescan
Index: src/test/regress/sql/bitmapops.sql
===================================================================
RCS file: src/test/regress/sql/bitmapops.sql
diff -N src/test/regress/sql/bitmapops.sql
*** /dev/null 1 Jan 1970 00:00:00 -0000
--- src/test/regress/sql/bitmapops.sql 12 Mar 2007 11:53:01 -0000
***************
*** 0 ****
--- 1,41 ----
+ -- Test bitmap AND and OR
+
+
+ -- Generate enough data that we can test the lossy bitmaps.
+
+ -- There are 55 tuples per page in the table. 53 is just
+ -- below 55, so that an index scan with qual a = constant
+ -- will return at least one hit per page. 59 is just above
+ -- 55, so that an index scan with qual b = constant will return
+ -- hits on most but not all pages. 53 and 59 are prime, so that
+ -- there's a maximum number of a,b combinations in the table.
+ -- That allows us to test all the different combinations of
+ -- lossy and non-lossy pages with the minimum amount of data.
+
+ CREATE TABLE bmscantest (a int, b int, t text);
+
+ INSERT INTO bmscantest
+ SELECT (r%53), (r%59), 'foooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo'
+ FROM generate_series(1,70000) r;
+
+ CREATE INDEX i_bmtest_a ON bmscantest(a);
+ CREATE INDEX i_bmtest_b ON bmscantest(b);
+
+ -- We want to use bitmapscans. With default settings, the planner currently
+ -- chooses a bitmap scan for the queries below anyway, but let's make sure.
+ set enable_indexscan=false;
+ set enable_seqscan=false;
+
+ -- Lower work_mem to trigger use of lossy bitmaps
+ set work_mem = 64;
+
+
+ -- Test bitmap-and.
+ SELECT count(*) FROM bmscantest WHERE a = 1 AND b = 1;
+
+ -- Test bitmap-or.
+ SELECT count(*) FROM bmscantest WHERE a = 1 OR b = 1;
+
+
+ -- clean up
+ DROP TABLE bmscantest;
Index: src/test/regress/sql/create_index.sql
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/test/regress/sql/create_index.sql,v
retrieving revision 1.22
diff -c -r1.22 create_index.sql
*** src/test/regress/sql/create_index.sql 9 Jan 2007 02:14:16 -0000 1.22
--- src/test/regress/sql/create_index.sql 12 Mar 2007 10:56:19 -0000
***************
*** 163,168 ****
--- 163,179 ----
SELECT * FROM array_index_op_test WHERE t <@ '{AAAAAAAA72908,AAAAAAAAAAAAAAAAAAA17075,AA88409,AAAAAAAAAAAAAAAAAA36842,AAAAAAA48038,AAAAAAAAAAAAAA10611}' ORDER BY seqno;
SELECT * FROM array_index_op_test WHERE t = '{AAAAAAAAAA646,A87088}' ORDER BY seqno;
+ -- Repeat some of the above tests but make sure we exercise bitmapscans
+ SET enable_indexscan = OFF;
+
+ SELECT * FROM array_index_op_test WHERE i @> '{32}' ORDER BY seqno;
+ SELECT * FROM array_index_op_test WHERE i && '{32}' ORDER BY seqno;
+ SELECT * FROM array_index_op_test WHERE i @> '{17}' ORDER BY seqno;
+ SELECT * FROM array_index_op_test WHERE i && '{17}' ORDER BY seqno;
+ SELECT * FROM array_index_op_test WHERE i @> '{32,17}' ORDER BY seqno;
+ SELECT * FROM array_index_op_test WHERE i && '{32,17}' ORDER BY seqno;
+ SELECT * FROM array_index_op_test WHERE i <@ '{38,34,32,89}' ORDER BY seqno;
+ SELECT * FROM array_index_op_test WHERE i = '{47,77}' ORDER BY seqno;
RESET enable_seqscan;
RESET enable_indexscan;
Index: src/test/regress/sql/oidjoins.sql
===================================================================
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/test/regress/sql/oidjoins.sql,v
retrieving revision 1.19
diff -c -r1.19 oidjoins.sql
*** src/test/regress/sql/oidjoins.sql 30 Dec 2006 21:21:56 -0000 1.19
--- src/test/regress/sql/oidjoins.sql 8 Mar 2007 20:13:13 -0000
***************
*** 33,42 ****
FROM pg_catalog.pg_am fk
WHERE amgettuple != 0 AND
NOT EXISTS(SELECT 1 FROM pg_catalog.pg_proc pk WHERE pk.oid = fk.amgettuple);
! SELECT ctid, amgetmulti
FROM pg_catalog.pg_am fk
! WHERE amgetmulti != 0 AND
! NOT EXISTS(SELECT 1 FROM pg_catalog.pg_proc pk WHERE pk.oid = fk.amgetmulti);
SELECT ctid, amrescan
FROM pg_catalog.pg_am fk
WHERE amrescan != 0 AND
--- 33,42 ----
FROM pg_catalog.pg_am fk
WHERE amgettuple != 0 AND
NOT EXISTS(SELECT 1 FROM pg_catalog.pg_proc pk WHERE pk.oid = fk.amgettuple);
! SELECT ctid, amgetbitmap
FROM pg_catalog.pg_am fk
! WHERE amgetbitmap != 0 AND
! NOT EXISTS(SELECT 1 FROM pg_catalog.pg_proc pk WHERE pk.oid = fk.amgetbitmap);
SELECT ctid, amrescan
FROM pg_catalog.pg_am fk
WHERE amrescan != 0 AND
Heikki Linnakangas <heikki@enterprisedb.com> writes:
The patch also adds support for candidate matches. An index scan can
indicate that the tuples it's returning are candidates, and the executor
will recheck the original scan quals of any candidate matches when the
tuple is fetched from heap.
This will not work, unless we change the planner --- the original quals
aren't necessarily there in some corner cases (partial indexes, if
memory serves).
The motivation for adding the support for candidate matches is that GIT
/ clustered indexes need it.
You need more than a vague reference to an unapplied patch to convince
me we ought to do this.
regards, tom lane
Tom Lane wrote:
Heikki Linnakangas <heikki@enterprisedb.com> writes:
The patch also adds support for candidate matches. An index scan can
indicate that the tuples it's returning are candidates, and the executor
will recheck the original scan quals of any candidate matches when the
tuple is fetched from heap.
This will not work, unless we change the planner --- the original quals
aren't necessarily there in some corner cases (partial indexes, if
memory serves).
This is only for bitmap scans, which *do* always have the original quals
available in the executor (BitmapHeapScanState.bitmapqualorig).
That's because we have to recheck the original conditions when the
bitmap goes lossy.
To support candidate matches with the amgettuple API, that'll need to be
changed as well. And that will indeed involve more executor changes.
The motivation for adding the support for candidate matches is that GIT
/ clustered indexes need it.
You need more than a vague reference to an unapplied patch to convince
me we ought to do this.
With the unapplied GIT patch, the index doesn't store the index key of
every tuple. That has the consequence that when scanning, we get a bunch
of tids to a heap page, we know that some of them might match, but we
don't know which ones until the tuples are fetched from heap.
In a more distant future, range-encoded bitmap indexes will also produce
candidate matches. And as I mentioned, this is immediately useful when
doing bitmap ANDs large enough to go lossy.
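To make the lossy-AND case concrete, here's a minimal standalone sketch of
the per-page semantics (illustrative C only, not the patch code; the
PageBits struct and its fields are invented for this example):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy per-page entry: which line pointers on one heap page match.
 * A lossy page keeps no per-tuple bits; every tuple on it is a candidate. */
typedef struct
{
    uint32_t blockno;
    bool     lossy;      /* no per-tuple bits kept */
    bool     recheck;    /* executor must recheck quals ("candidate") */
    uint64_t tuples;     /* bit N set => offset N matches (if !lossy) */
} PageBits;

/* ANDing an exact page with a lossy page keeps the exact page's bits,
 * but marks the result as needing a recheck: the lossy side only says
 * "somewhere on this page". */
static PageBits
page_and(PageBits a, PageBits b)
{
    PageBits r = { a.blockno, false, false, 0 };

    if (a.lossy && b.lossy)
    {
        r.lossy = true;
        r.recheck = true;
    }
    else if (a.lossy || b.lossy)
    {
        r.tuples = a.lossy ? b.tuples : a.tuples;   /* keep the exact side */
        r.recheck = true;                           /* candidates only */
    }
    else
    {
        r.tuples = a.tuples & b.tuples;
        r.recheck = a.recheck || b.recheck;
    }
    return r;
}

int
main(void)
{
    PageBits exact = { 7, false, false, 0x2A };  /* offsets 1, 3 and 5 */
    PageBits lossy = { 7, true, true, 0 };
    PageBits r = page_and(exact, lossy);

    printf("lossy=%d recheck=%d tuples=%#llx\n",
           r.lossy, r.recheck, (unsigned long long) r.tuples);
    return 0;
}

The point is just that the exact side's bits survive the AND, but only as
candidates that the executor has to recheck against the original quals.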
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
This will not work, unless we change the planner --- the original quals
aren't necessarily there in some corner cases (partial indexes, if
memory serves).
This is only for bitmap scans, which *do* always have the original quals
available in the executor (BitmapHeapScanState.bitmapqualorig).
That's because we have to recheck the original conditions when the
bitmap goes lossy.
Yeah, but the index AM has to support regular indexscans too, and those
are not prepared for runtime lossiness determination; nor am I
particularly willing to add that.
With the unapplied GIT patch, the index doesn't store the index key of
every tuple.
I thought the design was to eliminate *duplicate* keys from the index.
Not to lose data.
regards, tom lane
Tom Lane wrote:
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
This will not work, unless we change the planner --- the original quals
aren't necessarily there in some corner cases (partial indexes, if
memory serves).
This is only for bitmap scans, which *do* always have the original quals
available in the executor (BitmapHeapScanState.bitmapqualorig).
That's because we have to recheck the original conditions when the
bitmap goes lossy.
Yeah, but the index AM has to support regular indexscans too, and those
are not prepared for runtime lossiness determination; nor am I
particularly willing to add that.
Well, do you have an alternative suggestion?
With the unapplied GIT patch, the index doesn't store the index key of
every tuple.
I thought the design was to eliminate *duplicate* keys from the index.
Not to lose data.
The idea *isn't* to deal efficiently with duplicate keys. The bitmap
indexam is better suited for that.
The idea really is to lose information from the leaf index pages, in
favor of a drastically smaller index. On a completely clustered table,
the heap effectively is the leaf level of the index.
I'm glad we're having this conversation now. I'd really appreciate
review of the design. I've been posting updates every now and then,
asking for comments, but never got any. If you have suggestions, I'm all
ears and I still have some time left before feature freeze to make changes.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <heikki@enterprisedb.com> writes:
With the unapplied GIT patch, the index doesn't store the index key of
every tuple.
I thought the design was to eliminate *duplicate* keys from the index.
Not to lose data.
The idea really is to lose information from the leaf index pages, in
favor of a drastically smaller index. On a completely clustered table,
the heap effectively is the leaf level of the index.
I'm really dubious that this is an intelligent way to go. In the first
place, how will you keep the index sorted if you can't determine the
values of all the keys? It certainly seems that this would break the
ability to have a simple indexscan return sorted data, even if the index
itself doesn't get corrupted. In the second place, this seems to
forever kill the idea of indexscans that don't visit the heap --- not
that we have any near-term prospect of doing that, but I know a lot of
people remain interested in the idea.
The reason this catches me by surprise is that you've said several times
that you intended GIT to be something that could just be enabled
universally. If it's lossy then there's a much larger argument that not
everyone would want it.
regards, tom lane
Tom Lane wrote:
I'm really dubious that this is an intelligent way to go. In the first
place, how will you keep the index sorted if you can't determine the
values of all the keys? It certainly seems that this would break the
ability to have a simple indexscan return sorted data, even if the index
itself doesn't get corrupted.
That's indeed a very fundamental thing with the current design. The
index doesn't retain the complete order within heap pages. That
information is lost, again in favor of a smaller index size. It incurs a
significant CPU overhead, but on an I/O bound system that's a tradeoff
you want to make.
At the moment, I'm storing the offsets within the heap in a bitmap
attached to the index tuple. btgettuple fetches all the heap tuples
represented by the grouped index tuple, checks their visibility, sorts
them into index order, and returns them to the caller one at a time.
That's ugly, API-wise, because it makes the indexam actually go look
at the heap, which it shouldn't have to deal with.
Another approach I've been thinking of is to store a list of offsets, in
index order. That would avoid the problem of returning sorted data, and
reduce the CPU overhead incurred by sorting and scanning, at the cost of
a much larger (but still much smaller than what we have now) index.
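To make the two layouts concrete, here's a rough sketch of what a grouped
index tuple could carry in each case (illustrative declarations only, not
the actual GIT patch structs):

#include <stdint.h>
#include <stdio.h>

/* Option 1: one index tuple represents a whole group of heap tuples; the
 * heap offsets are kept as a bitmap.  Very compact, but the index order
 * within the group is lost, so a scan must fetch and then sort. */
typedef struct
{
    uint32_t heap_block;     /* heap page the group lives on */
    uint64_t offset_bitmap;  /* bit N set => heap offset N is in the group */
} GroupedTupleBitmap;

/* Option 2: an explicit list of offsets, stored in index key order.
 * Larger, but a scan can return the heap tuples in order directly. */
typedef struct
{
    uint32_t heap_block;
    uint16_t noffsets;
    uint16_t offsets[6];     /* fixed size here just for the demo */
} GroupedTupleList;

int
main(void)
{
    printf("bitmap variant: %zu bytes, list variant: %zu bytes\n",
           sizeof(GroupedTupleBitmap), sizeof(GroupedTupleList));
    return 0;
}

Either way the index stores one tuple per group instead of one per heap
tuple; the tradeoff is only in how much ordering information survives.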
In the second place, this seems to
forever kill the idea of indexscans that don't visit the heap --- not
that we have any near-term prospect of doing that, but I know a lot of
people remain interested in the idea.
I'm certainly interested in that. It's not really needed for clustered
indexes, though. A well-clustered index is roughly one level shallower,
and the heap effectively is the leaf-level, therefore the amount of I/O
you need to fetch the index tuple + heap tuple is roughly the same as
fetching just the index tuple from a normal b-tree index.
On non-clustered indexes, index-only scans would of course still be useful.
The reason this catches me by surprise is that you've said several times
that you intended GIT to be something that could just be enabled
universally. If it's lossy then there's a much larger argument that not
everyone would want it.
Yeah, we can't just always enable it by default. While a clustered index
would degrade to a normal b-tree when the heap isn't clustered, you
would still not want to always enable the index clustering because of
the extra CPU overhead. That has become clear in the CPU bound tests
I've run.
I think we could still come up with some safe conditions when we could
enable it by default, though. In particular, I've been thinking that if
you run CLUSTER on a table, you'd definitely want to use a clustered
index as well.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
In the second place, this seems to
forever kill the idea of indexscans that don't visit the heap --- not
that we have any near-term prospect of doing that, but I know a lot of
people remain interested in the idea.
I'm certainly interested in that. It's not really needed for clustered
indexes, though. A well-clustered index is roughly one level shallower,
and the heap effectively is the leaf-level, therefore the amount of I/O
you need to fetch the index tuple + heap tuple is roughly the same as
fetching just the index tuple from a normal b-tree index.
That argument ignores the fact that the heap entries are likely to be
much wider than the index entries, due to having other columns in them.
I think we could still come up with some safe conditions when we could
enable it by default, though.
At this point I'm feeling unconvinced that we want it at all. It's
sounding like a large increase in complexity (both implementation-wise
and in terms of API ugliness) for a fairly narrow use-case --- just how
much territory is going to be left for this between HOT and bitmap indexes?
I particularly dislike the idea of having the index AM reaching directly
into the heap --- we should be trying to get rid of that, not add more
cases.
regards, tom lane
Tom Lane wrote:
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
In the second place, this seems to
forever kill the idea of indexscans that don't visit the heap --- not
that we have any near-term prospect of doing that, but I know a lot of
people remain interested in the idea.
I'm certainly interested in that. It's not really needed for clustered
indexes, though. A well-clustered index is roughly one level shallower,
and the heap effectively is the leaf-level, therefore the amount of I/O
you need to fetch the index tuple + heap tuple is roughly the same as
fetching just the index tuple from a normal b-tree index.
That argument ignores the fact that the heap entries are likely to be
much wider than the index entries, due to having other columns in them.
True, that's the "roughly" part. It does indeed depend on your schema.
As a data point, here's the index sizes (in pages) of a 140 warehouse
TPC-C database:
index name     normal  grouped  % of normal size
------------------------------------------------
i_customer      31984    29250             91.5%
i_orders        11519    11386             98.8%
pk_customer     11519     1346             11.6%
pk_district         6        2
pk_item           276       10              3.6%
pk_new_order     3458       42              1.2%
pk_order_line  153632     2993              1.9%
pk_orders       11519      191              1.7%
pk_stock        38389     2815              7.3%
pk_warehouse        8        2
The customer table is an example of a pretty wide table; there are only ~12
tuples per page. pk_customer is still benefiting a lot. i_customer and
i_orders are not benefiting because the tables are not in the index
order. The orders-related indexes are seeing the most benefit; they
don't have many columns.
I think we could still come up with some safe conditions when we could
enable it by default, though.
At this point I'm feeling unconvinced that we want it at all. It's
sounding like a large increase in complexity (both implementation-wise
and in terms of API ugliness) for a fairly narrow use-case --- just how
much territory is going to be left for this between HOT and bitmap indexes?
I don't see how HOT is overlapping with clustered indexes. On the
contrary, it makes clustered indexes work better, because it reduces the
amount of index inserts needed and helps to keep a table clustered.
The use cases for bitmap indexes and clustered indexes do overlap
somewhat. But clustered indexes have an edge because:
- there's no requirement of having only a small number of distinct values
- they support uniqueness checks
- you can efficiently have a mixture of grouped and non-grouped tuples,
if your table is only partly clustered
In general, clustered indexes are more suited for OLTP work than bitmap
indexes.
I particularly dislike the idea of having the index AM reaching directly
into the heap --- we should be trying to get rid of that, not add more
cases.
I agree. The right way would be to add support for partial ordering and
candidate matches to the indexam API, and move all the sorting etc.
ugliness out of the indexam. That's been on my TODO since the beginning.
If you're still not convinced that we want this at all, how would you
feel about the other approach I described? The one where the
in-heap-page order is stored in the index tuples, so there's no need for
sorting, at the cost of losing part of the I/O benefit.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas wrote:
Tom Lane wrote:
Heikki Linnakangas <heikki@enterprisedb.com> writes:
Tom Lane wrote:
In the second place, this seems to
forever kill the idea of indexscans that don't visit the heap --- not
that we have any near-term prospect of doing that, but I know a lot of
people remain interested in the idea.
I'm certainly interested in that. It's not really needed for
clustered indexes, though. A well-clustered index is roughly one
level shallower, and the heap effectively is the leaf-level,
therefore the amount of I/O you need to fetch the index tuple + heap
tuple is roughly the same as fetching just the index tuple from
a normal b-tree index.
That argument ignores the fact that the heap entries are likely to be
much wider than the index entries, due to having other columns in them.
True, that's the "roughly" part. It does indeed depend on your schema.
As a data point, here's the index sizes (in pages) of a 140 warehouse
TPC-C database:
Ah, I see now that you didn't (necessarily) mean that the clustering
becomes inefficient at reducing the index size on wider tables, but that
there are many more heap pages than leaf pages in a normal index. That's
true; you might not want to use a clustered index in that case, to allow
index-only scans. If we had that feature, that is.
Often, though, when using index-only scans, columns are added to the
index to allow them to be returned in an index-only scan. That narrows
the gap a bit.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Mon, 2007-03-12 at 13:56 -0400, Tom Lane wrote:
At this point I'm feeling unconvinced that we want it at all. It's
sounding like a large increase in complexity (both implementation-wise
and in terms of API ugliness) for a fairly narrow use-case --- just
how much territory is going to be left for this between HOT and bitmap
indexes?
HOT and clustered indexes have considerable synergy. In many tests we've
got +20% performance with them acting together. Neither one achieves
this performance on their own, but together they work very well.
There is an overlap between clustered and bitmap indexes, but they come
at the problem from different ends of the scale. Bitmap indexes are
designed to cope well with highly non-unique data, while clustered
indexes optimise for unique or somewhat unique keys. The difference is
really bitmap for DW and clustered indexes for OLTP.
The ideas for bitmap indexes come from research and other RDBMS
implementations. Clustered indexes have also got external analogs - the
concepts are very similar to SQLServer Clustered Indexes and Teradata
Primary Indexes (Block Index structure), as well as being reasonably
close to Oracle's Index Organised Tables.
Clustered indexes offer a way to reduce index size to 1-5% of normal
b-tree sizes, while still maintaining uniqueness checking capability. For
VLDB, that is a win for either OLTP or DW - think about a 1 TB index
coming down to 10-50 GB in size. The benefit is significant for most
tables over ~1 GB in size, through I/O reduction on leaf pages.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Simon Riggs wrote:
On Mon, 2007-03-12 at 13:56 -0400, Tom Lane wrote:
At this point I'm feeling unconvinced that we want it at all. It's
sounding like a large increase in complexity (both implementation-wise
and in terms of API ugliness) for a fairly narrow use-case --- just
how much territory is going to be left for this between HOT and bitmap
indexes?
HOT and clustered indexes have considerable synergy. In many tests we've
got +20% performance with them acting together. Neither one achieves
this performance on their own, but together they work very well.
To clarify, Simon is talking about DBT-2 tests we ran in November.
Clustered indexes don't require HOT per se, but on TPC-C the performance
benefit comes from reducing the amount of I/O on the stock table and
index, and that's a table that gets updated at a steady rate. Without
HOT, the updates will disorganize the table and the performance gain you
get from clustered indexes vanishes after a while.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Mon, 12 Mar 2007, Heikki Linnakangas wrote:
Here's a patch to change the amgetmulti API so that it's called only
once per scan, and the indexam adds *all* matching tuples at once to a
caller-supplied TIDBitmap. Per Tom's proposal in July 2006:
http://archives.postgresql.org/pgsql-hackers/2006-07/msg01233.php
I incorporated something like your change to gistnext(). This is much
better, for the reason Teodor mentions up thread.
The return type of gistnext() is int and it is possible that it could
overflow (on some platforms) now that there is no max_tids.
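To spell out the concern, a toy demonstration of a 32-bit tuple count
wrapping once there's no max_tids cap (illustrative only, not gistnext()
itself; a 64-bit counter avoids the problem):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    int32_t n32 = INT32_MAX;               /* counter about to overflow */
    int64_t n64 = INT32_MAX;               /* a 64-bit counter keeps going */

    n32 = (int32_t) ((uint32_t) n32 + 1);  /* well-defined wrap for the demo */
    n64 = n64 + 1;

    printf("int32 count: %d, int64 count: %lld\n", n32, (long long) n64);
    return 0;
}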
Thanks,
Gavin
Tom Lane wrote:
At this point I'm feeling unconvinced that we want it at all. It's
sounding like a large increase in complexity (both implementation-wise
and in terms of API ugliness) for a fairly narrow use-case --- just how
much territory is going to be left for this between HOT and bitmap indexes?
I'm in an awkward situation right now. I've done my best to describe the
use cases for clustered indexes. I know the patch needs refactoring;
I've refrained from making API changes and tried to keep all the
ugliness inside the b-tree, knowing that there are changes to the indexam
API coming from the bitmap index patch as well.
I've been seeking comments on the design since November, knowing
that this is a non-trivial change. I have not wanted to spend too much
time polishing the patch, in case I need to rewrite it from scratch
because of some major design flaw or because someone comes up with a
much better idea.
It's frustrating to have the patch dismissed at this late stage on the
grounds of "it's not worth it". As I said in February, I have the time
to work on this, but if major changes are required to the current
design, I need to know.
Just to recap the general idea: reduce index size taking advantage of
clustering in the heap.
Clustered indexes have roughly the same performance effect and use cases
as clustered indexes on MS SQL Server, and Index-Organized-Tables on
Oracle, but the way I've implemented them is significantly different. On
other DBMSs, the index and heap are combined into a single b-tree
structure. The way I've implemented them is less invasive: there are no
changes to the heap, for example, and it doesn't require moving live tuples.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Wed, 2007-03-14 at 10:22, Heikki Linnakangas wrote:
Tom Lane wrote:
At this point I'm feeling unconvinced that we want it at all. It's
sounding like a large increase in complexity (both implementation-wise
and in terms of API ugliness) for a fairly narrow use-case --- just how
much territory is going to be left for this between HOT and bitmap indexes?
I'm in an awkward situation right now. I've done my best to describe the
use cases for clustered indexes.
...
Just to recap the general idea: reduce index size taking advantage of
clustering in the heap.
Clustered indexes have roughly the same performance effect and use cases
as clustered indexes on MS SQL Server, and Index-Organized-Tables on
Oracle, but the way I've implemented them is significantly different. On
other DBMSs, the index and heap are combined into a single b-tree
structure. The way I've implemented them is less invasive: there are no
changes to the heap, for example, and it doesn't require moving live tuples.
Do you keep visibility info in the index?
How does this info get updated when visibility data changes in the
heap?
If there is no visibility data in the index, then I can't see how it gets
the same performance effect as Index-Organized-Tables, as a lot of random
heap access is still needed.
--
----------------
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia
Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com
Hannu Krosing wrote:
On Wed, 2007-03-14 at 10:22, Heikki Linnakangas wrote:
Tom Lane wrote:
At this point I'm feeling unconvinced that we want it at all. It's
sounding like a large increase in complexity (both implementation-wise
and in terms of API ugliness) for a fairly narrow use-case --- just how
much territory is going to be left for this between HOT and bitmap indexes?
I'm in an awkward situation right now. I've done my best to describe the
use cases for clustered indexes.
...
Just to recap the general idea: reduce index size taking advantage of
clustering in the heap.
This is what I suggest.
Provide a tarball of -head with the patch applied.
Provide a couple of use cases that can be run with explanation of how to
verify the use cases.
Allow the community to drive the inclusion by making it as easy as
possible to allow a proactive argument to take place by the people
actually using the product.
Proving that a user could and would use the feature is a very powerful
argument.
Sincerely,
Joshua D. Drake
Joshua D. Drake wrote:
Allow the community to drive the inclusion by making it as easy as
possible to allow a proactive argument to take place by the people
actually using the product.
This seems to be a rather poor decision making process: "Are the users
happy with the new feature? If so, then apply the patch." It leads to
unmanageable code.
Which is why we don't do things that way. The code must fit within the
general architecture before application -- particularly if it's an
internal API change. That's what the review process is for.
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Alvaro Herrera wrote:
Joshua D. Drake wrote:
Allow the community to drive the inclusion by making it as easy as
possible to allow a proactive argument to take place by the people
actually using the product.
This seems to be a rather poor decision making process: "Are the users
happy with the new feature? If so, then apply the patch." It leads to
unmanageable code.
Perhaps reading my message again is in order. I think it is pretty
obvious that a user shouldn't determine if a patch should be applied.
My whole point was that if people are clamoring for the feature, it
could drive that feature to be more aggressively reviewed.
I can't even count how many times I see:
This seems like a corner case feature, I don't think we should add it.
So I am suggesting a way to ensure that the feature is not considered a
corner case (if it is indeed not a corner case).
Sincerely,
Joshua D. Drake
Hannu Krosing wrote:
On Wed, 2007-03-14 at 10:22, Heikki Linnakangas wrote:
Clustered indexes have roughly the same performance effect and use cases
as clustered indexes on MS SQL Server, and Index-Organized-Tables on
Oracle, but the way I've implemented them is significantly different. On
other DBMSs, the index and heap are combined into a single b-tree
structure. The way I've implemented them is less invasive: there are no
changes to the heap, for example, and it doesn't require moving live tuples.
Do you keep visibility info in the index?
No.
If there is no visibility data in the index, then I can't see how it gets
the same performance effect as Index-Organized-Tables, as a lot of random
heap access is still needed.
Let me illustrate the effect in the best case, with a table that
consists of just the key:
Normal b-tree:
Root -> leaf -> heap
aaa -> aaa -> aaa
bbb -> bbb
ccc -> ccc
ddd -> ddd -> ddd
eee -> eee
fff -> fff
ggg -> ggg -> ggg
hhh -> hhh
iii -> iii
Clustered b-tree:
Root -> heap
aaa -> aaa
bbb
ccc
ddd -> ddd
eee
fff
ggg -> ggg
hhh
iii
The index is much smaller, one level shallower in the best case. A
smaller index means that more of it fits in cache. If you're doing
random access through the index, that means that you need to do less I/O
because you don't need to fetch so many index pages. You need to access
the heap anyway for the visibility information, as you pointed out, but
the savings come from having to do less index I/O.
How close to the best case do you get in practice? It depends on your
schema (narrow tables, or tables with wide keys, gain the most) and on the
clusteredness of the table.
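For a back-of-the-envelope version of the same effect: a normal b-tree has
one leaf entry per heap tuple, a fully clustered index only one per heap
page. The fanout and tuples-per-page figures below are assumptions picked
for illustration, not measurements (compile with -lm):

#include <math.h>
#include <stdio.h>

int
main(void)
{
    const double ntuples = 100e6;    /* heap tuples */
    const double tup_per_page = 55;  /* heap tuples per heap page (assumed) */
    const double fanout = 250;       /* index entries per index page (assumed) */

    double normal_entries = ntuples;                 /* one per heap tuple */
    double grouped_entries = ntuples / tup_per_page; /* one per heap page */

    printf("normal:  %.0f leaf pages, ~%.0f levels\n",
           normal_entries / fanout,
           ceil(log(normal_entries) / log(fanout)));
    printf("grouped: %.0f leaf pages, ~%.0f levels\n",
           grouped_entries / fanout,
           ceil(log(grouped_entries) / log(fanout)));
    return 0;
}

With these assumed numbers the grouped index needs roughly one fiftieth of
the leaf pages and comes out one level shallower, which is the effect
illustrated above.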
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Alvaro Herrera wrote:
Which is why we don't do things that way. The code must fit within the
general architecture before application -- particularly if it's an
internal API change. That's what the review process is for.
Yes, of course. As I've said, I have the time to work on this, but I
need to get the review process *started*. Otherwise I'll just tweak and
polish the patch for weeks, and end up with something that gets rejected
in the end anyway.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Joshua D. Drake wrote:
This is what I suggest.
Provide a tarball of -head with the patch applied.
Here you are:
http://community.enterprisedb.com/git/pgsql-git-20070315.tar.gz
Provide a couple of use cases that can be run with explanation of how to
verify the use cases.
There are a number of simple test cases on the web page that I've used
(perfunittests). I can try to simplify them and add explanations.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas wrote:
Joshua D. Drake wrote:
This is what I suggest.
Provide a tarball of -head with the patch applied.
Here you are:
http://community.enterprisedb.com/git/pgsql-git-20070315.tar.gz
Provide a couple of use cases that can be run with explanation of how to
verify the use cases.
There are a number of simple test cases on the web page that I've used
(perfunittests). I can try to simplify them and add explanations.
I am downloading now.
Joshua D. Drake
Heikki Linnakangas wrote:
Joshua D. Drake wrote:
This is what I suggest.
Provide a tarball of -head with the patch applied.
Here you are:
http://community.enterprisedb.com/git/pgsql-git-20070315.tar.gz
Provide a couple of use cases that can be run with explanation of how to
verify the use cases.
There are a number of simple test cases on the web page that I've used
(perfunittests). I can try to simplify them and add explanations.
O.k., maybe I am the only one, but I actually dug through the archives for
the website you were talking about and then said, "Aha! He means:
http://community.enterprisedb.com/git/".
So I will accept my own paper bag, and hopefully save some from the same
fate by posting the above link.
Joshua D. Drake
Heikki Linnakangas wrote:
Joshua D. Drake wrote:
This is what I suggest.
Provide a tarball of -head with the patch applied.
Here you are:
http://community.enterprisedb.com/git/pgsql-git-20070315.tar.gz
Provide a couple of use cases that can be run with explanation of how to
verify the use cases.
There are a number of simple test cases on the web page that I've used
(perfunittests). I can try to simplify them and add explanations.
This URL is not working:
http://community.enterprisedb.com/git/git-perfunittests-20070222.tar.gz
File not found.
Sincerely,
Joshua D. Drake
Joshua D. Drake wrote:
This URL is not working:
http://community.enterprisedb.com/git/git-perfunittests-20070222.tar.gz
Sorry about that, typo in the filename. Fixed.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas wrote:
Joshua D. Drake wrote:
This URL is not working:
http://community.enterprisedb.com/git/git-perfunittests-20070222.tar.gz
Sorry about that, typo in the filename. Fixed.
Here are my results on a modest 3800X2 2 Gig of ram, RAID 1 dual SATA
http://pgsql.privatepaste.com/170yD8c0gr
Sincerely,
Joshua D. Drake
Joshua D. Drake wrote:
Heikki Linnakangas wrote:
Joshua D. Drake wrote:
This URL is not working:
http://community.enterprisedb.com/git/git-perfunittests-20070222.tar.gz
Sorry about that, typo in the filename. Fixed.
Here are my results on a modest 3800X2 2 Gig of ram, RAID 1 dual SATA
Thanks for looking into this, though that test alone doesn't really tell
us anything. You'd have to run the same tests with and without clustered
indexes enabled, and compare. With the default settings the test data
fits in memory anyway, so you're not seeing the I/O benefit but only the
CPU overhead.
Attached is a larger test case with a data set of > 2 GB. Run
git_demo_init.sql first to create the tables and indexes, and
git_demo_run.sql to perform selects on them. The test runs for quite a
long time, depending on your hardware, and prints the time spent on the
selects, with and without a clustered index.
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Attachments:
Heikki Linnakangas wrote:
Joshua D. Drake wrote:
Heikki Linnakangas wrote:
Joshua D. Drake wrote:
This URL is not working:
http://community.enterprisedb.com/git/git-perfunittests-20070222.tar.gz
Sorry about that, typo in the filename. Fixed.
Here are my results on a modest 3800X2 2 Gig of ram, RAID 1 dual SATA
heap_pages | normal_index_pages | clustered_index_pages
------------+--------------------+-----------------------
     216217 |             109679 |                  1316
select_with_normal_index
--------------------------
100000
(1 row)
Time: 1356524.743 ms
select_with_normal_index
--------------------------
100000
(1 row)
Time: 1144832.597 ms
select_with_normal_index
--------------------------
100000
(1 row)
Time: 1111445.236 ms
And now run the same tests with clustered index
Timing is on.
select_with_clustered_index
-----------------------------
100000
(1 row)
Time: 815622.768 ms
select_with_clustered_index
-----------------------------
100000
(1 row)
Time: 535749.457 ms
select_with_clustered_index
-----------------------------
100000
(1 row)
select relname,indexrelname,idx_blks_read,idx_blks_hit from
pg_statio_all_indexes where schemaname = 'public';
relname      | indexrelname                 | idx_blks_read | idx_blks_hit
--------------+------------------------------+---------------+--------------
 narrowtable  | narrowtable_index            |        296973 |       904654
 narrowtable2 | narrowtable2_clustered_index |         44556 |       857269
(2 rows)
select relname,heap_blks_read,heap_blks_hit,idx_blks_read,idx_blks_hit
from pg_statio_user_tables ;
relname      | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
--------------+----------------+---------------+---------------+--------------
 narrowtable2 |         734312 |      40304136 |         44556 |       857269
 narrowtable  |         952044 |      40002609 |        296973 |       904654
Seems like a clear win to me. Anyone else want to try?
Sincerely,
Joshua D. Drake
On Mar 16, 2007, at 10:12 PM, Heikki Linnakangas wrote:
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
groupeditems-42-pghead.patch.gz is enough, or it needs
maintain_cluster_order_v5.patch ??
--
Grzegorz Jaskiewicz
C/C++ freelance for hire
Grzegorz Jaskiewicz wrote:
On Mar 16, 2007, at 10:12 PM, Heikki Linnakangas wrote:
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
groupeditems-42-pghead.patch.gz is enough, or it needs
maintain_cluster_order_v5.patch ??
He has a patched source tarball of the whole thing here, which is what I used:
http://community.enterprisedb.com/git/pgsql-git-20070315.tar.gz
Then you just need to run the tests.
--
Joshua D. Drake
This is on a dual Ultra 2 SPARC, with ultrawide 320 SCSI drives and 512MB
of ram.
I had to drop the size of the DB, because the DB drive is 4GB (I do
welcome bigger drives as a donation, if someone asks - UWscsi 320).
Here are my results, with only the 4.2 patch (no maintain cluster order
v5 patch). If the v5 patch was needed, please tell me - I am going to
rerun it with that.
Hope it is useful.
Repeat 3 times to ensure repeatable results.
Timing is on.
select_with_normal_index
--------------------------
100000
(1 row)
Time: 1727891.334 ms
select_with_normal_index
--------------------------
100000
(1 row)
Time: 1325561.252 ms
select_with_normal_index
--------------------------
100000
(1 row)
Time: 1348530.100 ms
Timing is off.
And now run the same tests with clustered index
Timing is on.
select_with_clustered_index
-----------------------------
100000
(1 row)
Time: 870246.856 ms
select_with_clustered_index
-----------------------------
100000
(1 row)
Time: 477089.456 ms
select_with_clustered_index
-----------------------------
100000
(1 row)
Time: 381880.965 ms
Timing is off.
Wow, nice!
Can you tell us:
- how big is the table
- cardinality of the column
- how big is the index in each case
- how much memory on the machine
- query and explain analyze
Thanks!
- Luke
Msg is shrt cuz m on ma treo
On Mar 17, 2007, at 10:33 PM, Luke Lonergan wrote:
Wow, nice!
Can you tell us:
- how big is the table
- cardinality of the column
- how big is the index in each case
- how much memory on the machine
- query and explain analyze
All I changed was the 400k to 150k.
512MB of ram, as I said earlier. And it is running a 64bit kernel,
32bit user-land on Linux 2.6.20.
The query and explain are going to run for a while, so I'll leave them - as
they are going to be the same on other machines (much faster ones).
postgres=# select pg_size_pretty( pg_relation_size
( 'narrowtable_index' ) );
pg_size_pretty
----------------
321 MB
(1 row)
postgres=# select pg_size_pretty( pg_relation_size
( 'narrowtable2_clustered_index' ) );
pg_size_pretty
----------------
3960 kB
(1 row)
(so there's quite a difference).
Judging from the noises coming out of the machine, there was plenty of
I/O activity. And funnily enough, one CPU was stuck on 'wait' up to
80% most of the time.
The 'cardinality' (uniqueness, I guess) is the same as intended in the
original test. Like I said, only the table size was changed.
select count(distinct key) from narrowtable; and select count(*) from
narrowtable; are the same - 15000000.
hth.
--
Grzegorz Jaskiewicz
C/C++ freelance for hire
Grzegorz Jaskiewicz wrote:
On Mar 16, 2007, at 10:12 PM, Heikki Linnakangas wrote:
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
groupeditems-42-pghead.patch.gz is enough, or it needs
maintain_cluster_order_v5.patch ??
No, it won't make a difference unless you're inserting to the table, and
the inserts are not in cluster order.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Hackers et al... I was wondering if there are any outstanding issues
that need to be resolved in terms of the clustered index/bitmap changes?
From the testing that I have done, plus a couple of others, it is a net
win (at least from DBA space).
Sincerely,
Joshua D. Drake
On Tue, 20 Mar 2007, Joshua D. Drake wrote:
Hackers et al... I was wondering if there are any outstanding issues
that need to be resolved in terms of the clustered index/bitmap changes?
From the testing that I have done, plus a couple of others, it is a net
win (at least from DBA space).
Not sure if you're talking about bitmap indexes here. If so, I'm working
on VACUUM support.
Gavin
Gavin Sherry wrote:
On Tue, 20 Mar 2007, Joshua D. Drake wrote:
Hackers et al... I was wondering if there are any outstanding issues
that need to be resolved in terms of the clustered index/bitmap changes?
From the testing that I have done, plus a couple of others, it is a net
win (at least from DBA space).
Not sure if you're talking about bitmap indexes here. If so, I'm working
on VACUUM support.
I was talking about the patch for Clustered indexes and I realize now I
might have used the wrong thread. ;)
Joshua D. Drake
Joshua D. Drake wrote:
Hackers et al... I was wondering if there are any outstanding issues
that need to be resolved in terms of the clustered index/bitmap changes?
I have a todo list of smaller items for clustered indexes, but the main
design issues at the moment are:
1. How to handle sorting tuples in a scan, or should we choose a design
that doesn't require it?
Should we add support for sorting tuples in scans on the fly, which
gives more space savings when there are updates, and would also be useful
in the future to support binned bitmap indexes?
Or should we only form groups from tuples that are completely in order
at the page level? That makes a clustered index lose its space savings
more quickly when tuples are updated. HOT reduces that effect, though. This
approach would also reduce the CPU overhead of scans, because we could
do binary searches within groups (see the sketch at the end of this mail).
At the moment, I'm leaning towards the latter approach. What do others
think?
2. Clustered indexes need support for candidate matches. That needs
to be added to the amgetmulti and amgettuple interfaces. I've sent a
patch for amgetmulti, and a proposal for amgettuple.
3. Clustered index needs to reach out to the heap for some operations,
like uniqueness checks do today, blurring the modularity between heap
and index. Are we willing to live with that? Is there something we can
do to make it less ugly?
I'd like to get some kind of confirmation first that 1 and 3 are not
showstoppers, to avoid wasting time on a patch that'll just get rejected
in the end, and then submit a patch for 2, and have that committed
before the main patch.
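As promised under point 1, here's the kind of within-group binary search
the page-level-ordering approach enables (the layout is invented for
illustration; a group's offsets are assumed to be stored in index key
order):

#include <stdio.h>

/* Return the first position whose key is >= target.  With an in-order
 * group, a scan can start here instead of sorting all fetched tuples. */
static int
group_lower_bound(const int *keys, int n, int target)
{
    int lo = 0, hi = n;

    while (lo < hi)
    {
        int mid = lo + (hi - lo) / 2;

        if (keys[mid] < target)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

int
main(void)
{
    int keys[] = { 3, 8, 8, 15, 21, 42 };   /* one group, in index order */

    printf("first key >= 15 is at position %d\n",
           group_lower_bound(keys, 6, 15));
    return 0;
}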
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Mar 19, 2007, at 11:16 AM, Heikki Linnakangas wrote:
Grzegorz Jaskiewicz wrote:
On Mar 16, 2007, at 10:12 PM, Heikki Linnakangas wrote:
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
groupeditems-42-pghead.patch.gz is enough, or it needs
maintain_cluster_order_v5.patch ??
No, it won't make a difference unless you're inserting to the
table, and the inserts are not in cluster order.
Well, that's okay then. I see a really good improvement in terms of
speed and db size (which obviously reflects in I/O performance).
Let me know if further testing can be done. I would happily see it in
mainline.
--
Grzegorz Jaskiewicz
C/C++ freelance for hire
Grzegorz Jaskiewicz wrote:
On Mar 19, 2007, at 11:16 AM, Heikki Linnakangas wrote:
Grzegorz Jaskiewicz wrote:
On Mar 16, 2007, at 10:12 PM, Heikki Linnakangas wrote:
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
groupeditems-42-pghead.patch.gz is enough, or it needs
maintain_cluster_order_v5.patch ??
No, it won't make a difference unless you're inserting to the table,
and the inserts are not in cluster order.
Well, that's okay then. I see a really good improvement in terms of speed
and db size (which obviously reflects in I/O performance).
Let me know if further testing can be done. I would happily see it in
mainline.
Right. My understanding is that the clustered index will gradually
degrade to a normal btree, is that correct, Heikki?
We could of course resolve this by doing a reindex.
The other item I think this would be great for is fairly static tables.
Think about tables that are children of partitions that haven't been
touched in 6 months. Why are we wasting space with them?
Anyway, from a "feature" perspective I can't see any negative. I can not
speak from a code injection (into core) perspective.
Joshua D. Drake
Grzegorz Jaskiewicz wrote:
On Mar 19, 2007, at 11:16 AM, Heikki Linnakangas wrote:
Grzegorz Jaskiewicz wrote:
On Mar 16, 2007, at 10:12 PM, Heikki Linnakangas wrote:
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
groupeditems-42-pghead.patch.gz is enough, or it needs
maintain_cluster_order_v5.patch ??
No, it won't make a difference unless you're inserting to the table,
and the inserts are not in cluster order.
Well, that's okay then. I see a really good improvement in terms of speed
and db size (which obviously reflects in I/O performance).
Let me know if further testing can be done. I would happily see it in
mainline.
If you have a real-world database you could try it with, that would be
nice. The test I sent you is pretty much a best-case scenario, it'd be
interesting to get anecdotal evidence of improvements in real applications.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Joshua D. Drake wrote:
Right. My understanding is that the clustered index will gradually
degrade to a normal btree, is that correct, Heikki?
That's right.
We could of course resolve this by doing a reindex.
Not reindex, but cluster. How clustered the index can be depends on the
clusteredness of the heap.
The other item I think this would be great for is fairly static tables.
Think about tables that are children of partitions that haven't been
touched in 6 months. Why are we wasting space with them?
By touched, you mean updated, right? Yes, it's particularly suitable for
static tables, since once you cluster them, they stay clustered.
Log-tables that are only inserted to, in monotonically increasing key
order, also stay clustered naturally.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Mar 21, 2007, at 5:22 PM, Heikki Linnakangas wrote:
Grzegorz Jaskiewicz wrote:
On Mar 19, 2007, at 11:16 AM, Heikki Linnakangas wrote:
Grzegorz Jaskiewicz wrote:
On Mar 16, 2007, at 10:12 PM, Heikki Linnakangas wrote:
You'll obviously need to run it with the patch applied. I'd suggest
enabling stats_block_level to see the effect on the buffer cache hit/miss
ratio.
groupeditems-42-pghead.patch.gz is enough, or it needs
maintain_cluster_order_v5.patch ??
No, it won't make a difference unless you're inserting to the
table, and the inserts are not in cluster order.
Well, that's okay then. I see a really good improvement in terms of
speed and db size (which obviously reflects in I/O performance).
Let me know if further testing can be done. I would happily see it
in mainline.
If you have a real-world database you could try it with, that would
be nice. The test I sent you is pretty much a best-case scenario,
it'd be interesting to get anecdotal evidence of improvements in
real applications.
Sure, I'll check it with my network statistics thingie. 30GB db atm,
with millions of rows (traffic analysis for a wide network, Ethernet
level, from/to/protocol/size kind of thing). Loads of updates on 2
tables (that's where I also see HOT would benefit me).
--
Grzegorz Jaskiewicz
C/C++ freelance for hire
Any idea how this patch is going to play with HOT? Or should I just
give it a spin, and see if my world collapses :D
--
Grzegorz Jaskiewicz
C/C++ freelance for hire
Grzegorz Jaskiewicz wrote:
Any idea how this patch is going to play with HOT? Or should I just
give it a spin, and see if my world collapses :D
I've run tests with both patches applied. I haven't tried with the
latest HOT-versions, but they should in theory work fine together.
You'll get a conflict on the pg_stats-views, both patches add
statistics, but IIRC you can just ignore that and it works. I think
there's a conflict in regression tests as well.
Give it a shot and let me know if there's problems :).
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 3/22/07, Heikki Linnakangas <heikki@enterprisedb.com> wrote:
Grzegorz Jaskiewicz wrote:
Any idea how this patch is going to play with HOT? Or should I just
give it a spin, and see if my world collapses :D
I've run tests with both patches applied. I haven't tried with the
latest HOT-versions, but they should in theory work fine together.
You'll get a conflict on the pg_stats-views, both patches add
statistics, but IIRC you can just ignore that and it works. I think
there's a conflict in regression tests as well.
Give it a shot and let me know if there's problems :).
Heikki, the signature of heap_fetch is changed slightly (we pass
a boolean to guide HOT-chain following) with HOT. That might
cause a conflict, I haven't tested though.
Grzegorz, if you can try HOT as well, that will be great.
Thanks,
Pavan
--
EnterpriseDB http://www.enterprisedb.com
On Mar 22, 2007, at 7:25 AM, Pavan Deolasee wrote:
Grzegorz, if you can try HOT as well, that will be great.
I tried, and it worked very well with the 4.2 version of the patch, as I
remember.
My point was: since 'the day' comes closer, and you guys work on
closely related areas inside pg, I would like to be able to safely run both
patches.
I will give both a go, once I get some free time here.
--
Grzegorz Jaskiewicz
starving C/C++ freelance for hire