Bloom index cost model seems to be wrong
I stumbled upon this question:
https://dba.stackexchange.com/questions/229413
in a nutshell: the bloom index is not used with the example from the manual.
The bloom index is only used if either Seq Scan is disabled or if the random_page_cost is set to 1 (anything above 1 triggers a Seq Scan on my Windows laptop).
If parallel execution is disabled, then the bloom index is only used if the random_page_cost is lower than 4.
This does not use the index:
set random_page_cost = 4;
set max_parallel_workers_per_gather=0;
explain (analyze, buffers)
select *
from tbloom
where i2 = 898732
and i5 = 123451;
This uses the bloom index:
set random_page_cost = 3.5;
set max_parallel_workers_per_gather=0;
explain (analyze, buffers)
select *
from tbloom
where i2 = 898732
and i5 = 123451;
And this uses the index also:
set random_page_cost = 1;
explain (analyze, buffers)
select *
from tbloom
where i2 = 898732
and i5 = 123451;
This is the plan when the index is used (either through "enable_seqscan = off" or "random_page_cost = 1")
Bitmap Heap Scan on tbloom (cost=138436.69..138440.70 rows=1 width=24) (actual time=42.444..42.444 rows=0 loops=1)
Recheck Cond: ((i2 = 898732) AND (i5 = 123451))
Rows Removed by Index Recheck: 2400
Heap Blocks: exact=2365
Buffers: shared hit=21973
-> Bitmap Index Scan on bloomidx (cost=0.00..138436.69 rows=1 width=0) (actual time=40.756..40.756 rows=2400 loops=1)
Index Cond: ((i2 = 898732) AND (i5 = 123451))
Buffers: shared hit=19608
Planning Time: 0.075 ms
Execution Time: 42.531 ms
And this is the plan when everything is left at default settings:
Seq Scan on tbloom (cost=0.00..133695.80 rows=1 width=24) (actual time=1220.116..1220.116 rows=0 loops=1)
Filter: ((i2 = 898732) AND (i5 = 123451))
Rows Removed by Filter: 10000000
Buffers: shared hit=4697 read=58998
I/O Timings: read=354.670
Planning Time: 0.075 ms
Execution Time: 1220.144 ms
Can this be considered a bug in the cost model of the bloom index implementation?
Or is it expected that this is only used if random access is really cheap?
Thomas
Thomas Kellerer <spam_eater@gmx.net> writes:
The bloom index is only used if either Seq Scan is disabled or if the random_page_cost is set to 1 (anything above 1 triggers a Seq Scan on my Windows laptop).
Hm. blcostestimate is using the default cost calculation, except for
/* We have to visit all index tuples anyway */
costs.numIndexTuples = index->tuples;
which essentially tells genericcostestimate to assume that every index
tuple will be visited. This obviously is going to increase the cost
estimate; maybe there's something wrong with that?
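For a rough sense of scale, here is a simplified sketch of just the per-tuple CPU charge that results (constants are the stock defaults from costsize.c; the real genericcostestimate includes I/O and other terms):

```python
# Simplified sketch: the per-tuple CPU charge genericcostestimate applies
# when numIndexTuples is forced to the full index size (10M rows, per the
# EXPLAIN output earlier in the thread). Constants are the stock defaults.
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025
n_quals = 2                  # (i2 = ...) AND (i5 = ...)
num_index_tuples = 10_000_000

# each visited index tuple is charged cpu_index_tuple_cost plus one
# cpu_operator_cost per indexqual
cpu_charge = num_index_tuples * (cpu_index_tuple_cost + n_quals * cpu_operator_cost)
print(cpu_charge)  # 100000.0
```

That CPU term alone is already in the same ballpark as the entire seq-scan estimate from the plan above (133695.80), so any I/O charge on top of it pushes the bloom path out of contention.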
I notice that the bloom regression test script is only testing queries
where it forces the choice of plan type, so it really doesn't prove
anything about whether the cost estimates are sane.
regards, tom lane
On Tue, Feb 12, 2019 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Thomas Kellerer <spam_eater@gmx.net> writes:
The bloom index is only used if either Seq Scan is disabled or if the
random_page_cost is set to 1 (anything above 1 triggers a Seq Scan on my
Windows laptop).
Hm. blcostestimate is using the default cost calculation, except for
/* We have to visit all index tuples anyway */
costs.numIndexTuples = index->tuples;
which essentially tells genericcostestimate to assume that every index
tuple will be visited. This obviously is going to increase the cost
estimate; maybe there's something wrong with that?
I assumed (without investigating yet) that genericcostestimate is applying
a cpu_operator_cost (or a few of them) on each index tuple, while the
premise of a bloom index is that you do very fast bit-fiddling, not more
expensive SQL operators, for each tuple and then do the recheck only on what
survives to the table tuple part.
Cheers,
Jeff
On Tue, Feb 12, 2019 at 11:58 AM Jeff Janes <jeff.janes@gmail.com> wrote:
On Tue, Feb 12, 2019 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Hm. blcostestimate is using the default cost calculation, except for
/* We have to visit all index tuples anyway */
costs.numIndexTuples = index->tuples;
which essentially tells genericcostestimate to assume that every index
tuple will be visited. This obviously is going to increase the cost
estimate; maybe there's something wrong with that?
I assumed (without investigating yet) that genericcostestimate is applying
a cpu_operator_cost (or a few of them) on each index tuple, while the
premise of a bloom index is that you do very fast bit-fiddling, not more
expensive SQL operators, for each tuple and then do the recheck only on what
survives to the table tuple part.
In order for bloom (or any other users of CREATE ACCESS METHOD, if there
are any) to have a fighting chance to do better, I think many of selfuncs.c's
currently private functions would have to be declared in some header file,
perhaps utils/selfuncs.h. But that then requires a cascade of other
inclusions. Perhaps that is why it was not done.
Cheers,
Jeff
Jeff Janes <jeff.janes@gmail.com> writes:
In order for bloom (or any other users of CREATE ACCESS METHOD, if there
are any) to have a fighting chance to do better, I think many of selfuncs.c's
currently private functions would have to be declared in some header file,
perhaps utils/selfuncs.h. But that then requires a cascade of other
inclusions. Perhaps that is why it was not done.
I'm just in the midst of refactoring that stuff, so if you have
suggestions, let's hear 'em.
It's possible that a good cost model for bloom is so far outside
genericcostestimate's ideas that trying to use it is not a good
idea anyway.
regards, tom lane
On Tue, Feb 12, 2019 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Jeff Janes <jeff.janes@gmail.com> writes:
In order for bloom (or any other users of CREATE ACCESS METHOD, if there
are any) to have a fighting chance to do better, I think many of selfuncs.c's
currently private functions would have to be declared in some header file,
perhaps utils/selfuncs.h. But that then requires a cascade of other
inclusions. Perhaps that is why it was not done.
I'm just in the midst of refactoring that stuff, so if you have
suggestions, let's hear 'em.
The goal would be that I can copy the entire definition of
genericcostestimate into blcost.c, change the function's name, and get it
to compile. I don't know the correct way to accomplish that. Maybe
utils/selfuncs.h can be expanded to work, or perhaps there should be a new
header file like "utils/index_am_cost.h".
What I've done for now is:
#include "../../src/backend/utils/adt/selfuncs.c"
which I assume is not acceptable as a real solution.
It's possible that a good cost model for bloom is so far outside
genericcostestimate's ideas that trying to use it is not a good
idea anyway.
I think that might be the case. I don't know what the right answer would
look like, but I think it will likely end up needing to access everything
that genericcostestimate currently needs to access. Or if bloom doesn't,
some other extension implementing an ACCESS METHOD will.
Cheers,
Jeff
I've moved this to the hackers list, and added Teodor and Alexander of the
bloom extension, as I would like to hear their opinions on the costing.
On Tue, Feb 12, 2019 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
It's possible that a good cost model for bloom is so far outside
genericcostestimate's ideas that trying to use it is not a good
idea anyway.
I'm attaching a crude patch based over your refactored header files.
I just copied genericcostestimate into bloom, and made a few changes.
I think one change should be conceptually uncontroversial, which is to
change the IO cost from random_page_cost to seq_page_cost. Bloom indexes
are always scanned in their entirety.
The other change is not to charge any cpu_operator_cost per tuple. Bloom
doesn't go through the ADT machinery, it just does very fast
bit-twiddling. I could assign a fraction of a cpu_operator_cost,
multiplied by bloomLength rather than list_length(indexQuals), to this
bit-twiddling. But I think that that fraction would need to be quite close
to zero, so I just removed it.
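As a back-of-the-envelope illustration of those two changes (the page count below is a made-up figure purely for illustration; the 10M tuple count comes from the example above, and the constants are the stock defaults):

```python
# Before/after sketch of the patch's two changes. index_pages is a
# hypothetical figure; the real value comes from index->pages.
seq_page_cost = 1.0
random_page_cost = 4.0
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025
index_pages = 20_000         # hypothetical
index_tuples = 10_000_000
n_quals = 2

# stock genericcostestimate: random-access I/O plus a per-tuple
# operator charge for every index tuple
before = index_pages * random_page_cost + \
         index_tuples * (cpu_index_tuple_cost + n_quals * cpu_operator_cost)

# patched: sequential I/O (the whole index is always read) and no
# per-tuple cpu_operator_cost for the bit-twiddling
after = index_pages * seq_page_cost + index_tuples * cpu_index_tuple_cost

print(before, after)  # 180000.0 70000.0
```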
When everything is in memory, Bloom still gets way overcharged for CPU
usage even without the cpu_operator_cost. This patch doesn't do anything
about that. I don't know if the default value of cpu_index_tuple_cost is
way too high, or if Bloom just shouldn't be charging the full value of it
in the first place given the way it accesses index tuples. For comparison,
when using a Btree as an equality filter on a non-leading column, most of
the time goes to index_getattr. Should the time spent there be loaded on
cpu_operator_cost or onto cpu_index_tuple_cost? It is not strictly spent
in the operator, but fetching the parameter to be used in an operator is
more closely a per-operator problem than a per-tuple problem.
Most of genericcostestimate still applies. For example, ScalarArrayOpExpr
handling, and the Mackert-Lohman formula. It is a shame that all of that has
to be copied.
There are some other parts of genericcostestimate that probably don't apply
(OrderBy, for example) but I left them untouched for now to make it easier
to reconcile changes to the real genericcostestimate with the copy.
For ScalarArrayOpExpr, it would be nice to scan the index once and add to
the bitmap all branches of the =ANY in one index scan, but I don't see the
machinery to do that. It would be a matter for another patch anyway, other
than the way it would change the cost estimate.
Cheers,
Jeff
Attachments:
bloom_cost_v2.patch
diff --git a/contrib/bloom/blcost.c b/contrib/bloom/blcost.c
index f9fe57f..6b6c759 100644
--- a/contrib/bloom/blcost.c
+++ b/contrib/bloom/blcost.c
@@ -13,10 +13,175 @@
#include "postgres.h"
#include "fmgr.h"
+#include "optimizer/cost.h"
+#include "optimizer/optimizer.h"
#include "utils/selfuncs.h"
+#include "utils/spccache.h"
#include "bloom.h"
+void
+genericcostestimate2(PlannerInfo *root,
+ IndexPath *path,
+ double loop_count,
+ GenericCosts *costs)
+{
+ IndexOptInfo *index = path->indexinfo;
+ List *indexQuals = get_quals_from_indexclauses(path->indexclauses);
+ List *indexOrderBys = path->indexorderbys;
+ Cost indexStartupCost;
+ Cost indexTotalCost;
+ Selectivity indexSelectivity;
+ double indexCorrelation;
+ double numIndexPages;
+ double numIndexTuples;
+ double spc_seq_page_cost;
+ double num_sa_scans;
+ double num_outer_scans;
+ double num_scans;
+ double qual_op_cost;
+ double qual_arg_cost;
+ List *selectivityQuals;
+ ListCell *l;
+
+ /*
+ * If the index is partial, AND the index predicate with the explicitly
+ * given indexquals to produce a more accurate idea of the index
+ * selectivity.
+ */
+ selectivityQuals = add_predicate_to_index_quals(index, indexQuals);
+
+ /*
+ * Check for ScalarArrayOpExpr index quals, and estimate the number of
+ * index scans that will be performed.
+ */
+ num_sa_scans = 1;
+ foreach(l, indexQuals)
+ {
+ RestrictInfo *rinfo = (RestrictInfo *) lfirst(l);
+
+ if (IsA(rinfo->clause, ScalarArrayOpExpr))
+ {
+ ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) rinfo->clause;
+ int alength = estimate_array_length(lsecond(saop->args));
+
+ if (alength > 1)
+ num_sa_scans *= alength;
+ }
+ }
+
+ /* Estimate the fraction of main-table tuples that will be visited */
+ indexSelectivity = clauselist_selectivity(root, selectivityQuals,
+ index->rel->relid,
+ JOIN_INNER,
+ NULL);
+
+ /* Bloom always reads all tuples */
+ numIndexTuples = costs->numIndexTuples;
+
+ /*
+ * Always estimate at least one tuple is touched
+ */
+ if (numIndexTuples < 1.0)
+ numIndexTuples = 1.0;
+
+ /* Bloom always reads all pages */
+ numIndexPages = index->pages;
+
+ /* fetch estimated page cost for tablespace containing index */
+ get_tablespace_page_costs(index->reltablespace,
+ NULL,
+ &spc_seq_page_cost);
+
+ /*
+ * Now compute the disk access costs.
+ *
+ * The above calculations are all per-index-scan. However, if we are in a
+ * nestloop inner scan, we can expect the scan to be repeated (with
+ * different search keys) for each row of the outer relation. Likewise,
+ * ScalarArrayOpExpr quals result in multiple index scans. This creates
+ * the potential for cache effects to reduce the number of disk page
+ * fetches needed. We want to estimate the average per-scan I/O cost in
+ * the presence of caching.
+ *
+ * We use the Mackert-Lohman formula (see costsize.c for details) to
+ * estimate the total number of page fetches that occur. While this
+ * wasn't what it was designed for, it seems a reasonable model anyway.
+ * Note that we are counting pages not tuples anymore, so we take N = T =
+ * index size, as if there were one "tuple" per page.
+ */
+ num_outer_scans = loop_count;
+ num_scans = num_sa_scans * num_outer_scans;
+
+ if (num_scans > 1)
+ {
+ double pages_fetched;
+
+ /* total page fetches ignoring cache effects */
+ pages_fetched = numIndexPages * num_scans;
+
+ /* use Mackert and Lohman formula to adjust for cache effects */
+ pages_fetched = index_pages_fetched(pages_fetched,
+ index->pages,
+ (double) index->pages,
+ root);
+
+ /*
+ * Now compute the total disk access cost, and then report a pro-rated
+ * share for each outer scan. (Don't pro-rate for ScalarArrayOpExpr,
+ * since that's internal to the indexscan.)
+ */
+ indexTotalCost = (pages_fetched * spc_seq_page_cost)
+ / num_outer_scans;
+ }
+ else
+ {
+ /*
+ * For a single index scan, we just charge spc_seq_page_cost per
+ * page touched.
+ */
+ indexTotalCost = numIndexPages * spc_seq_page_cost;
+ }
+
+ /*
+ * CPU cost: any complex expressions in the indexquals will need to be
+ * evaluated once at the start of the scan to reduce them to runtime keys
+ * to pass to the index AM (see nodeIndexscan.c). We model the per-tuple
+ * CPU costs as cpu_index_tuple_cost. No cpu_operator_cost is added
+ * as Bloom indexes do not use ADT operators. Because we have numIndexTuples as a per-scan
+ * number, we have to multiply by num_sa_scans to get the correct result
+ * for ScalarArrayOpExpr cases.
+ *
+ * Note: this neglects the possible costs of rechecking lossy operators.
+ * Detecting that that might be needed seems more expensive than it's
+ * worth, though, considering all the other inaccuracies here ...
+ */
+ qual_arg_cost = index_other_operands_eval_cost(root, indexQuals) +
+ index_other_operands_eval_cost(root, indexOrderBys);
+ qual_op_cost = cpu_operator_cost *
+ (list_length(indexQuals) + list_length(indexOrderBys));
+
+ indexStartupCost = qual_arg_cost;
+ indexTotalCost += qual_arg_cost;
+ indexTotalCost += numIndexTuples * num_sa_scans * (cpu_index_tuple_cost);
+
+ /*
+ * Generic assumption about index correlation: there isn't any.
+ */
+ indexCorrelation = 0.0;
+
+ /*
+ * Return everything to caller.
+ */
+ costs->indexStartupCost = indexStartupCost;
+ costs->indexTotalCost = indexTotalCost;
+ costs->indexSelectivity = indexSelectivity;
+ costs->indexCorrelation = indexCorrelation;
+ costs->numIndexPages = numIndexPages;
+ costs->numIndexTuples = numIndexTuples;
+ costs->num_sa_scans = num_sa_scans;
+}
+
/*
* Estimate cost of bloom index scan.
*/
@@ -35,7 +200,7 @@ blcostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
costs.numIndexTuples = index->tuples;
/* Use generic estimate */
- genericcostestimate(root, path, loop_count, &costs);
+ genericcostestimate2(root, path, loop_count, &costs);
*indexStartupCost = costs.indexStartupCost;
*indexTotalCost = costs.indexTotalCost;
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 2bc8f92..6517ecb 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -587,6 +587,9 @@ intorel_receive(TupleTableSlot *slot, DestReceiver *self)
/* We know this is a newly created relation, so there are no indexes */
+ /* Free the copied tuple. */
+ heap_freetuple(tuple);
+
return true;
}
On Sun, Feb 24, 2019 at 11:09 AM Jeff Janes <jeff.janes@gmail.com> wrote:
I've moved this to the hackers list, and added Teodor and Alexander of the
bloom extension, as I would like to hear their opinions on the costing.
My previous patch had accidentally included a couple lines of a different
thing I was working on (memory leak, now-committed), so this patch removes
that diff.
I'm adding it to the commitfest targeting v13. I'm more interested in
feedback on the conceptual issues rather than stylistic ones, as I would
probably merge the two functions together before proposing something to
actually be committed.
Should we be trying to estimate the false positive rate and charging
cpu_tuple_cost, cpu_operator_cost, and the IO costs for visiting the table to
recheck and reject those? I don't think other index types do that, and I'm
inclined to think the burden should be on the user not to create silly
indexes that produce an overwhelming number of false positives.
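If we did want to estimate it, the textbook Bloom-filter formula would be a starting point (a sketch only; contrib/bloom's per-row signatures and per-column bit counts differ in detail from the classic single-filter model, and the parameter values below are illustrative, not the extension's defaults):

```python
import math

def false_positive_rate(m_bits, k_hashes, n_entries):
    """Classic Bloom-filter estimate: p = (1 - e^(-k*n/m))^k."""
    return (1.0 - math.exp(-k_hashes * n_entries / m_bits)) ** k_hashes

# e.g. an 80-bit signature, 2 bits set per hashed value, 6 values hashed in
p = false_positive_rate(80, 2, 6)
```

Doubling the signature length drives the rate down sharply, which is the trade-off the user already controls via the index's length option.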
Cheers,
Jeff
Attachments:
bloom_cost_v3.patch
diff --git a/contrib/bloom/blcost.c b/contrib/bloom/blcost.c
index f9fe57f..6b6c759 100644
--- a/contrib/bloom/blcost.c
+++ b/contrib/bloom/blcost.c
@@ -13,10 +13,175 @@
#include "postgres.h"
#include "fmgr.h"
+#include "optimizer/cost.h"
+#include "optimizer/optimizer.h"
#include "utils/selfuncs.h"
+#include "utils/spccache.h"
#include "bloom.h"
+void
+genericcostestimate2(PlannerInfo *root,
+ IndexPath *path,
+ double loop_count,
+ GenericCosts *costs)
+{
+ IndexOptInfo *index = path->indexinfo;
+ List *indexQuals = get_quals_from_indexclauses(path->indexclauses);
+ List *indexOrderBys = path->indexorderbys;
+ Cost indexStartupCost;
+ Cost indexTotalCost;
+ Selectivity indexSelectivity;
+ double indexCorrelation;
+ double numIndexPages;
+ double numIndexTuples;
+ double spc_seq_page_cost;
+ double num_sa_scans;
+ double num_outer_scans;
+ double num_scans;
+ double qual_op_cost;
+ double qual_arg_cost;
+ List *selectivityQuals;
+ ListCell *l;
+
+ /*
+ * If the index is partial, AND the index predicate with the explicitly
+ * given indexquals to produce a more accurate idea of the index
+ * selectivity.
+ */
+ selectivityQuals = add_predicate_to_index_quals(index, indexQuals);
+
+ /*
+ * Check for ScalarArrayOpExpr index quals, and estimate the number of
+ * index scans that will be performed.
+ */
+ num_sa_scans = 1;
+ foreach(l, indexQuals)
+ {
+ RestrictInfo *rinfo = (RestrictInfo *) lfirst(l);
+
+ if (IsA(rinfo->clause, ScalarArrayOpExpr))
+ {
+ ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) rinfo->clause;
+ int alength = estimate_array_length(lsecond(saop->args));
+
+ if (alength > 1)
+ num_sa_scans *= alength;
+ }
+ }
+
+ /* Estimate the fraction of main-table tuples that will be visited */
+ indexSelectivity = clauselist_selectivity(root, selectivityQuals,
+ index->rel->relid,
+ JOIN_INNER,
+ NULL);
+
+ /* Bloom always reads all tuples */
+ numIndexTuples = costs->numIndexTuples;
+
+ /*
+ * Always estimate at least one tuple is touched
+ */
+ if (numIndexTuples < 1.0)
+ numIndexTuples = 1.0;
+
+ /* Bloom always reads all pages */
+ numIndexPages = index->pages;
+
+ /* fetch estimated page cost for tablespace containing index */
+ get_tablespace_page_costs(index->reltablespace,
+ NULL,
+ &spc_seq_page_cost);
+
+ /*
+ * Now compute the disk access costs.
+ *
+ * The above calculations are all per-index-scan. However, if we are in a
+ * nestloop inner scan, we can expect the scan to be repeated (with
+ * different search keys) for each row of the outer relation. Likewise,
+ * ScalarArrayOpExpr quals result in multiple index scans. This creates
+ * the potential for cache effects to reduce the number of disk page
+ * fetches needed. We want to estimate the average per-scan I/O cost in
+ * the presence of caching.
+ *
+ * We use the Mackert-Lohman formula (see costsize.c for details) to
+ * estimate the total number of page fetches that occur. While this
+ * wasn't what it was designed for, it seems a reasonable model anyway.
+ * Note that we are counting pages not tuples anymore, so we take N = T =
+ * index size, as if there were one "tuple" per page.
+ */
+ num_outer_scans = loop_count;
+ num_scans = num_sa_scans * num_outer_scans;
+
+ if (num_scans > 1)
+ {
+ double pages_fetched;
+
+ /* total page fetches ignoring cache effects */
+ pages_fetched = numIndexPages * num_scans;
+
+ /* use Mackert and Lohman formula to adjust for cache effects */
+ pages_fetched = index_pages_fetched(pages_fetched,
+ index->pages,
+ (double) index->pages,
+ root);
+
+ /*
+ * Now compute the total disk access cost, and then report a pro-rated
+ * share for each outer scan. (Don't pro-rate for ScalarArrayOpExpr,
+ * since that's internal to the indexscan.)
+ */
+ indexTotalCost = (pages_fetched * spc_seq_page_cost)
+ / num_outer_scans;
+ }
+ else
+ {
+ /*
+ * For a single index scan, we just charge spc_seq_page_cost per
+ * page touched.
+ */
+ indexTotalCost = numIndexPages * spc_seq_page_cost;
+ }
+
+ /*
+ * CPU cost: any complex expressions in the indexquals will need to be
+ * evaluated once at the start of the scan to reduce them to runtime keys
+ * to pass to the index AM (see nodeIndexscan.c). We model the per-tuple
+ * CPU costs as cpu_index_tuple_cost. No cpu_operator_cost is added
+ * as Bloom indexes do not use ADT operators. Because we have numIndexTuples as a per-scan
+ * number, we have to multiply by num_sa_scans to get the correct result
+ * for ScalarArrayOpExpr cases.
+ *
+ * Note: this neglects the possible costs of rechecking lossy operators.
+ * Detecting that that might be needed seems more expensive than it's
+ * worth, though, considering all the other inaccuracies here ...
+ */
+ qual_arg_cost = index_other_operands_eval_cost(root, indexQuals) +
+ index_other_operands_eval_cost(root, indexOrderBys);
+ qual_op_cost = cpu_operator_cost *
+ (list_length(indexQuals) + list_length(indexOrderBys));
+
+ indexStartupCost = qual_arg_cost;
+ indexTotalCost += qual_arg_cost;
+ indexTotalCost += numIndexTuples * num_sa_scans * (cpu_index_tuple_cost);
+
+ /*
+ * Generic assumption about index correlation: there isn't any.
+ */
+ indexCorrelation = 0.0;
+
+ /*
+ * Return everything to caller.
+ */
+ costs->indexStartupCost = indexStartupCost;
+ costs->indexTotalCost = indexTotalCost;
+ costs->indexSelectivity = indexSelectivity;
+ costs->indexCorrelation = indexCorrelation;
+ costs->numIndexPages = numIndexPages;
+ costs->numIndexTuples = numIndexTuples;
+ costs->num_sa_scans = num_sa_scans;
+}
+
/*
* Estimate cost of bloom index scan.
*/
@@ -35,7 +200,7 @@ blcostestimate(PlannerInfo *root, IndexPath *path, double loop_count,
costs.numIndexTuples = index->tuples;
/* Use generic estimate */
- genericcostestimate(root, path, loop_count, &costs);
+ genericcostestimate2(root, path, loop_count, &costs);
*indexStartupCost = costs.indexStartupCost;
*indexTotalCost = costs.indexTotalCost;
Jeff Janes <jeff.janes@gmail.com> writes:
Should we be trying to estimate the false positive rate and charging
cpu_tuple_cost, cpu_operator_cost, and the IO costs for visiting the table to
recheck and reject those? I don't think other index types do that, and I'm
inclined to think the burden should be on the user not to create silly
indexes that produce an overwhelming number of false positives.
Heap-access costs are added on in costsize.c, not in the index
cost estimator. I don't remember at the moment whether there's
any explicit accounting for lossy indexes (i.e. false positives).
Up to now, there haven't been cases where we could estimate the
false-positive rate with any accuracy, so we may just be ignoring
the effect. But if we decide to account for it, I'd rather have
costsize.c continue to add on the actual cost, perhaps based on
a false-positive-rate fraction returned by the index estimator.
regards, tom lane
On Fri, Mar 1, 2019 at 7:11 AM Jeff Janes <jeff.janes@gmail.com> wrote:
I'm adding it to the commitfest targeting v13. I'm more interested in feedback on the conceptual issues rather than stylistic ones, as I would probably merge the two functions together before proposing something to actually be committed.
From the trivialities department, this patch shows up as a CI failure
with -Werror, because there is no declaration for
genericcostestimate2(). I realise that's just a temporary name in a
WIP patch anyway so this isn't useful feedback, but for the benefit of
anyone going through CI failures in bulk looking for things to
complain about: this isn't a real one.
--
Thomas Munro
https://enterprisedb.com
It's not clear to me what the next action should be on this patch. I
think Jeff got some feedback from Tom, but was that enough to expect a
new version to be posted? That was in February; should we now (in late
September) close this as Returned with Feedback?
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Sep 25, 2019 at 05:12:26PM -0300, Alvaro Herrera wrote:
It's not clear to me what the next action should be on this patch. I
think Jeff got some feedback from Tom, but was that enough to expect a
new version to be posted? That was in February; should we now (in late
September) close this as Returned with Feedback?
That sounds rather right to me.
--
Michael